Abstracts

CSM #1 Secure Communications

There is a need to securely pass documents electronically through insecure media such as the Internet. We are implementing a prototype system that sends messages securely through email by encrypting the messages and associated information within images. The client will manage user-owned stamps, encrypt messages within the stamps, send stamps to other clients, and validate the authenticity of received stamps. The server will track the ownership of stamps, provide certification information for stamps received by clients, and track stamp transactions between users.
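
The abstract does not name the embedding scheme, so as a concrete illustration, the sketch below hides a message in the least significant bits of a stamp image's pixels. The scheme, the function names, and the use of the Pillow library are assumptions for illustration, not the project's actual design.

    # Hypothetical LSB-steganography sketch; the project's real embedding
    # scheme may differ. Requires the Pillow imaging library.
    from PIL import Image

    def embed_message(stamp_path, message: bytes, out_path):
        """Hide `message` in the least significant bits of the stamp image."""
        img = Image.open(stamp_path).convert("RGB")
        # Prefix the payload with a 4-byte length header so the receiver
        # knows how many bytes to extract.
        payload = len(message).to_bytes(4, "big") + message
        bits = [(b >> i) & 1 for b in payload for i in range(7, -1, -1)]
        flat = [ch for px in img.getdata() for ch in px]
        if len(bits) > len(flat):
            raise ValueError("message too large for this stamp image")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | bit   # overwrite the lowest bit
        img.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
        img.save(out_path, "PNG")            # lossless format preserves bits

Extraction reverses the process: read the first 32 low-order bits to recover the length, then read that many payload bytes. In the real system the message would be encrypted before embedding.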

A primary goal of this project is to create a series of benchmarks to ensure that the overall system is scalable and efficient. If we have time, we will implement a stamp gallery that lets users browse the stamps in their possession. This prototype will be implemented for the Windows platform, but with an emphasis on creating platform-independent code where possible.

CSM #2 Quantum Physics

The goal of this project is to develop a MATLAB software package that, given a prescribed basis, can approximate strict upper bounds on the ground-state solutions of nonlinear, time-independent Schrödinger equations for arbitrary potential functions, along with an algorithm that minimizes the ground-state energy with respect to the basis coefficients.

Initially, the set of approximating equations will use Hermite polynomials as an eigenbasis. Other eigenbasis sets, such as sine/cosine and Legendre functions, will also be investigated. The algorithm will incorporate the principle of least action to determine the coefficients of the eigenbasis being used. Following the algorithm evaluation, the results will be reported numerically and graphically, then compared to empirical data and assessed for accuracy. Time permitting, multiple physically motivated cases will also be analyzed, and a graphical user interface will be developed.
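
As a numerical illustration (in Python rather than the project's MATLAB, and using the standard variational bound on the ground-state energy rather than the project's least-action formulation), the sketch below expands a trial wavefunction in normalized Hermite functions and minimizes the Rayleigh quotient over the coefficients. The grid, basis size, and potential are illustrative assumptions.

    # Variational sketch: E[c] = <psi|H|psi> / <psi|psi> is an upper bound on
    # the ground-state energy for any trial psi = sum_n c_n * phi_n(x).
    # Units use hbar = m = 1; a nonlinear equation would add a
    # density-dependent term to the energy functional.
    import numpy as np
    from math import factorial
    from scipy.optimize import minimize

    x = np.linspace(-8, 8, 801)
    dx = x[1] - x[0]
    V = 0.5 * x**2 + 0.1 * x**4          # example anharmonic potential

    def hermite_function(n, x):
        """Normalized Hermite function (harmonic-oscillator eigenstate)."""
        coef = np.zeros(n + 1)
        coef[n] = 1.0
        Hn = np.polynomial.hermite.hermval(x, coef)
        norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
        return norm * Hn * np.exp(-x**2 / 2)

    basis = np.array([hermite_function(n, x) for n in range(8)])

    def energy(c):
        psi = c @ basis
        dpsi = np.gradient(psi, dx)
        kinetic = 0.5 * np.sum(dpsi**2) * dx
        potential = np.sum(V * psi**2) * dx
        return (kinetic + potential) / (np.sum(psi**2) * dx)

    c0 = np.zeros(8)
    c0[0] = 1.0                           # start from the Gaussian ground state
    result = minimize(energy, c0)
    print("Variational upper bound on E0:", result.fun)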

CSM #3 Regional Climate Change

Using the statistical software package R, our goal is to design and implement a web interface that lets a user visualize regional climate change under different assumptions through an interactive map and user-defined parameters. The model is constructed through a Bayesian analysis of climate data collected from around the world and compiled in the IPCC (Intergovernmental Panel on Climate Change) report.

Interacting with the web interface, the user inputs the parameters needed to produce the desired model output. If the user chooses an aggregated output, the web interface pulls the IPCC data from a local database and passes both the user parameters and the IPCC data to a program. Because producing a spatial output takes a large computer a long time, if the user chooses a spatial output, the web interface instead pulls values from a database of precalculated results (again based on the IPCC data). Both outputs produce a data file and an appropriate figure or plot. The web interface provides an option for the user to export the data files, figures, and plots in a variety of formats.
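
The branch between on-the-fly aggregated output and precalculated spatial output might be organized as below; the database schema, table names, and run_model hook are illustrative assumptions, and the real system delegates the statistics to R.

    # Hypothetical dispatch between the two output paths; table and column
    # names are placeholders, not the project's actual schema.
    import sqlite3

    def run_model(ipcc_rows, params):
        """Stub standing in for the R program that computes aggregated output."""
        return {"rows": len(ipcc_rows), "params": params}

    def handle_request(params):
        conn = sqlite3.connect("climate.db")     # local database of IPCC data
        if params["output_type"] == "aggregated":
            # Aggregated output is cheap enough to compute on demand.
            ipcc_rows = conn.execute("SELECT * FROM ipcc_data").fetchall()
            return run_model(ipcc_rows, params)
        # Spatial output is too slow to compute per request, so look up a
        # precalculated result keyed by the user's parameter combination.
        row = conn.execute(
            "SELECT result_file FROM spatial_cache WHERE scenario = ?",
            (params["scenario"],),
        ).fetchone()
        return row[0] if row else None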

CSM #4 Java Tutorials for S/W Engineering

While studying student behavior during lab sessions, a team of two students, Matt Gimlin and Stuart Fehr, realized that switching between reading the Java textbook tutorials and performing the tasks on the computer made students less productive. This observation created the need for a more efficient on-screen tutorial system to help students maintain their focus during lab sessions. The team's suggestion was to create an Eclipse plug-in with the tutorials included, so that students could have the tutorial in the same window as their labs. Some features of our project are:

CSM #5 BPC Tech Club

The goal of this project is to create interest in computer science among female high school students. We will create fun and interesting activities that will introduce students to computer science in an after-school technology club.
The project has two separate aspects: the first is to create a small animatronic creature, and the second involves a small robot and a wireless sensor.

The animatronic creature will be small, inexpensive, and remote controlled. The creature will be able to move forward and backward, will have movable arms, and will be capable of expressing some rudimentary emotions. After its completion we will film it in front of a green screen and add digital effects using Blender, an open-source 3D modeling/animation program.

The second project uses a Scribbler, a relatively inexpensive programmable robot with a variety of sensors, and a Sun SPOT, a small wireless sensor that can be programmed to perform a variety of tasks. The Scribbler comes with eight factory demos loaded by default, ranging from simple tests of its sensors to more advanced programs like room exploration. We will re-create these demos in the Scribbler's graphical programming language and write tutorials on how to write these programs. The plan is for these tutorials to be a starting point for students to write more advanced programs on the Scribbler.

The final component of this project combines the Scribbler and the Sun SPOT: the Scribbler will explore a room while the Sun SPOT tracks any changes in temperature. A Java applet on a nearby computer will turn this data into a rough thermal map of the room. There will also be a tutorial on how to run this applet.
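
The thermal-mapping step could work roughly as follows; this Python sketch (the actual component will be a Java applet) bins (x, y, temperature) readings into a coarse grid, with the cell size and sample data made up for illustration.

    # Bin (x, y, temperature) readings from the Sun SPOT into a grid as the
    # Scribbler explores; averaging per cell gives a rough thermal map.
    from collections import defaultdict

    CELL = 0.5    # grid cell size in meters (assumed)

    def thermal_map(readings):
        """readings: iterable of (x, y, temp) -> {(col, row): mean temp}."""
        sums = defaultdict(lambda: [0.0, 0])
        for x, y, temp in readings:
            cell = (int(x // CELL), int(y // CELL))
            sums[cell][0] += temp
            sums[cell][1] += 1
        return {cell: s / n for cell, (s, n) in sums.items()}

    # Illustrative readings, not real sensor data:
    print(thermal_map([(0.1, 0.2, 21.5), (0.3, 0.1, 22.1), (1.2, 0.4, 24.0)]))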


Data Verity #1

DataVerity is a consulting firm that helps banks decide how to increase revenue by studying sales and referral data. The consultants use a web-based system called ESP (Elevated Service Performance), which collects and presents the data critical to the recommendations DataVerity offers. The consultants would like to improve the quality of, and their confidence in, the advice they provide to clients by adding the ability to investigate relationships between sales decisions and actual sales performance.

We will begin improving the ESP system by designing database procedures that will generate statistical data such as correlations, linear regressions, and confidence intervals. Once this functionality has been added, we will enhance the existing interface to include this information.
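
For reference, the named statistics look like this when computed with SciPy; the sample numbers below are made up, and the production versions would run inside the database as stored procedures.

    # Reference calculations for the statistics the database procedures will
    # generate: correlation, linear regression, and a confidence interval.
    # The sample data is illustrative, not DataVerity's.
    import numpy as np
    from scipy import stats

    referrals = np.array([12, 19, 24, 30, 35, 41], dtype=float)
    sales = np.array([3, 5, 8, 9, 12, 14], dtype=float)

    r, p = stats.pearsonr(referrals, sales)              # correlation
    fit = stats.linregress(referrals, sales)             # linear regression

    mean = sales.mean()
    sem = stats.sem(sales)                               # standard error
    lo, hi = stats.t.interval(0.95, len(sales) - 1, loc=mean, scale=sem)

    print(f"r = {r:.3f} (p = {p:.3f})")
    print(f"sales = {fit.slope:.2f} * referrals + {fit.intercept:.2f}")
    print(f"95% CI for mean sales: [{lo:.2f}, {hi:.2f}]")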

Data Verity #2

Data Verity is a data analysis company whose objective is to provide various data reports for their clients. Their current online user interface is unintuitive and hard to navigate; it was developed with minimal programming knowledge and, as a result, follows poor programming practices.

Our objective is to redesign the interface, making it more dynamic and user friendly. We will achieve this using the Ext JS JavaScript framework, which will allow us to build a web desktop as an interface for clients to manage their data and reports.

Los Alamos #1 Consistent HPC Environment

The Los Alamos National Laboratory has several computing clusters with different environments. Georgia A. Pedicini approached Mines for a field session project to help collect data about the user environments on these clusters, organize it, and display it on a web page.

The purpose of this project is to develop a system of tools that collects software version information and displays it on a web page. The web page will be used to easily find inconsistencies on a cluster and to help schedule software updates. It should be easy to use and should have two levels of access: one for users to see what is installed, and one for sysadmins to keep track of inconsistencies and comments from the ptools staff.

Part of the project is to check for consistency between installed modules and installed packages. Modules are the actual programs that users interact with, together with their required libraries. Packages are a bookkeeping method for recording which versions of software are installed; they are also known as Red Hat Package Manager files (RPMs).
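
Once the collection step has produced, for each machine, one software-to-version mapping from the module listing and another from the RPM database, the cross-reference itself is simple. A sketch, with illustrative data rather than real cluster output:

    # Module-vs-package consistency check over pre-collected version maps.
    def find_inconsistencies(modules: dict, packages: dict) -> list:
        """Return (name, module_version, package_version) triples that disagree."""
        problems = []
        for name, mod_ver in modules.items():
            pkg_ver = packages.get(name)
            if pkg_ver is None:
                problems.append((name, mod_ver, "missing RPM"))
            elif pkg_ver != mod_ver:
                problems.append((name, mod_ver, pkg_ver))
        return problems

    mods = {"openmpi": "1.2.4", "gcc": "4.1.2"}
    rpms = {"openmpi": "1.2.3", "gcc": "4.1.2"}
    print(find_inconsistencies(mods, rpms))   # [('openmpi', '1.2.4', '1.2.3')]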

The website we design will display software versions that are installed on every machine and indicate inconsistencies between and within clusters. Possible extensions to the project include: cross referencing the list of modules with the packages installed to verify proper installation, verifying whether libraries are properly installed on the slave machines, detecting software that does not have an explicit default version, notifying the ptools team by email when problems occur, and allowing the ptools team to leave comments on the website regarding software conflicts.

Los Alamos #2 Parallel I/O Performance

Los Alamos National Laboratory is a major player in the high-performance computing world. It has several large computer clusters that are used for widely varying projects, all of which use the clusters for massively parallel computing tasks. One of the most important tasks for these programs is storing the information they compute, which is done using the I/O capabilities of an MPI-2 standards-compliant library.

However, different projects use different parts of the I/O facilities, and Los Alamos is looking to improve its parallel I/O testing library. Currently, the tests consist of a single program that exercises only a fraction of the MPI I/O function calls, so the tests need to be expanded and improved. The purpose of the tests is to check the integrity of the MPI I/O libraries in use. A test file will be created and written out to disk. After a check to make sure the write was successful, the file will be read back in. Upon read, two checks will be performed: first, the return value of the function will be checked for error; second, the integrity of the file will be verified. This basic testing pattern will be used with several different methods of writing files.
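
A minimal sketch of this pattern, written with the mpi4py bindings for brevity (the actual tests would presumably exercise the MPI-IO C API directly, across the several write methods mentioned):

    # Write/verify/read/verify pattern. Run under mpiexec; the file name and
    # buffer size are illustrative.
    import hashlib
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    n = 1024  # doubles per rank

    out = np.full(n, rank, dtype="d")         # known per-rank test pattern
    fh = MPI.File.Open(comm, "testfile.dat",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    fh.Write_at_all(rank * out.nbytes, out)   # each rank writes its own block
    fh.Close()                                # an MPI error here raises

    inp = np.empty(n, dtype="d")
    fh = MPI.File.Open(comm, "testfile.dat", MPI.MODE_RDONLY)
    fh.Read_at_all(rank * inp.nbytes, inp)
    fh.Close()

    # Integrity check: the data read back must match the data written.
    ok = hashlib.md5(inp.tobytes()).digest() == hashlib.md5(out.tobytes()).digest()
    print(f"rank {rank}: {'PASS' if ok else 'FAIL'}")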

The first tier of the project is to get these tests to report success or failure of the I/O operation. This is used to test the general integrity of the MPI I/O installation. The second tier is to have the test programs record performance information. This will be used to assess any changes to the MPI I/O framework over time.

Los Alamos #3 I/O Traces

Los Alamos National Laboratory is a leading national security research institution that provides scientific and engineering solutions for the nation's most complex problems. Their focus is primarily on safety, security, and reliability of our nation’s defenses. Los Alamos National Laboratory also works to advance the fields of computer science, bioscience, material science, chemistry, and physics.

To support these large research efforts, LANL has developed some of the world's most powerful supercomputing clusters, whose performance and reliability are key to supporting those efforts. Our project aims to provide a visualization solution to help analyze the I/O of these large computing clusters. Because much of the data is classified, this visualization solution will provide a way for people outside of LANL to analyze the patterns and performance of the large computing clusters.

Spatial

Test Center Summary Project

Spatial Corp provides high-performance 3D software components for use by computer-aided design (CAD) software. These components are provided across many platforms, which use many different compilers.

The software is tested with a tool named Test Center. Test Center currently displays test results in a table organized similarly to the structure of the testing system. The goal of our project is to create a new view that organizes the test results independently of compiler, allowing developers to better identify the source code causing a failure.

Implementing the new view will require changes to the Test Center results page as well as a new database query and anything needed to connect them.

This will include the:
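
Independent of those implementation details, the regrouping at the heart of the new view can be sketched as a pivot from per-configuration rows to per-test rows; the row layout, configuration names, and statuses below are illustrative assumptions, not Test Center's actual schema.

    # Pivot raw results, which arrive grouped by (platform, compiler), into
    # per-test rows so a test that fails under every compiler stands out.
    from collections import defaultdict

    rows = [  # what the new database query might return
        ("test_fillet", "win32-msvc8", "PASS"),
        ("test_fillet", "linux-gcc4", "FAIL"),
        ("test_blend", "win32-msvc8", "FAIL"),
        ("test_blend", "linux-gcc4", "FAIL"),
    ]

    by_test = defaultdict(dict)
    for test, config, status in rows:
        by_test[test][config] = status

    for test, results in by_test.items():
        failing = [c for c, s in results.items() if s == "FAIL"]
        if len(failing) == len(results):
            print(f"{test}: fails under every compiler -> likely source-code bug")
        elif failing:
            print(f"{test}: fails only on {failing}")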

Sun #1

OpenSolaris is the open-source operating system from Sun Microsystems, Inc. As a relatively new and still rapidly growing project, there is a lot of room for the development of complementary programs; as such, the need for a web-based custom ISO creation tool arose. This tool will give end users much more control over the packages available to them when installing from the resulting live image. (As with Ubuntu and similar distributions, the intent is that the user will burn the resulting disc image to a CD and then use the included installer to actually install OpenSolaris.)

Generally, the tool will take user selections and provide them as input to the distribution constructor, a preexisting piece of software used to generate an ISO. By using already-established repositories, the user should have access to a number of different configurations; at minimum, different groups of applications will be selectable, though we hope to enable more fine-grained control should time allow.
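
The glue between the web form and the distribution constructor might look like the following; the manifest syntax and the command line are placeholders, since the distribution constructor's real interface is not described here.

    # Hypothetical glue code: write the user's package selections to a
    # manifest file and invoke the distribution constructor. Both the
    # manifest format and the command shown are placeholders, not the
    # distribution constructor's real interface.
    import subprocess
    import tempfile

    def build_iso(selected_packages: list) -> None:
        with tempfile.NamedTemporaryFile("w", suffix=".manifest",
                                         delete=False) as manifest:
            for pkg in selected_packages:
                manifest.write(f"pkg:{pkg}\n")    # placeholder syntax
            path = manifest.name
        # Placeholder invocation; the real tool name and flags would come
        # from the distribution constructor's documentation.
        subprocess.run(["distro_const", "build", path], check=True)

    build_iso(["SUNWgnome-core", "SUNWfirefox"])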

Sun #2: OS Port

The bounds of this project are not well defined. The end goal is to have OpenSolaris (project Indiana) running on the SPARC machine architecture. Because the kernel code should, at most, only need to be recompiled between the x86 and SPARC architectures, the effort of this project will go into writing a new secondary boot program to help the kernel onto its feet. From what we've gathered, on (Open)Solaris under SPARC, the following four independent stages occur:

OBP - OBP runs POST, creates the device tree, and loads and executes a BOOTER from disk or network.

BOOTER - The loaded BOOTER reads the boot archive off the specified device into memory, creating the RAMDISK, and executes it. This stage requires knowledge of the filesystem (e.g., ZFS). Note: The boot archives are the same across x86 and SPARC.

RAMDISK - This is the boot archive, containing either kernel modules or an installation mini-root. The RAMDISK proceeds to extract the KERNEL image from the archive and execute it. Note: The RAMDISK has a filesystem format of its own; neither the kernel nor the booter needs to know this format.

KERNEL - After it has been executed from the RAMDISK, the kernel unpacks the rest of the required modules from the boot archive. Once complete, it mounts the real root filesystem and unmounts the boot archive. Note: This final stage is the same on x86.


Toilers #1: Visualizer for Event Tracking

Simulations are a great way to test a project. They can be run quickly and repeatedly, to do things like test a piece of software or see how a variable affects a system. Most simulations output a text-file trace of what happened, and perhaps some final results. A visualization tool is then used to transform this output into a visually presentable, easy-to-understand format, such as an animation.

My task is to create a visualization tool for Nick Hubbell's sensor network simulation. The visualizer will turn the results from Nick's simulation into an animated node graph. This graph needs to show a connection between each node and its parent, representing data transfer. Each node also needs to be colored to show which chemical that node is sensing. Along with this, an image needs to be displayed behind the graph for each simulation step, showing the extent of the chemical plume; this might be a gradient of the nodes' colors, with a darker color showing higher density. Lastly, each node needs to be inspectable: when a node is clicked, a drop-down list will show data about that node. This drop-down needs to be collapsible so that the user isn't overwhelmed with information.
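
To illustrate the layering of one animation frame (iNSpect itself is written in C++; this matplotlib sketch only demonstrates the idea), the plume image is drawn in the back, parent-link edges above it, and chemical-colored nodes on top. All data here is made up.

    # One frame: plume image behind, data-transfer edges, colored nodes on top.
    import matplotlib.pyplot as plt
    import numpy as np

    nodes = {1: (2, 3), 2: (5, 4), 3: (4, 1)}       # node id -> (x, y)
    parent = {2: 1, 3: 1}                            # node id -> parent id
    chemical = {1: "red", 2: "blue", 3: "red"}       # node id -> sensed chemical

    plume = np.random.rand(10, 10)                   # stand-in for plume image

    fig, ax = plt.subplots()
    ax.imshow(plume, extent=(0, 6, 0, 6), origin="lower", cmap="Reds", alpha=0.4)
    for child, par in parent.items():                # edges to parents
        (x1, y1), (x2, y2) = nodes[child], nodes[par]
        ax.plot([x1, x2], [y1, y2], "k-", zorder=1)
    for nid, (x, y) in nodes.items():                # colored sensor nodes
        ax.scatter(x, y, c=chemical[nid], s=120, zorder=2)
    plt.show()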

To do this, I will extend a tool called iNSpect, which is normally used to visualize wireless network simulations. This will involve extending iNSpect's main classes, writing an interpreter for Nick's file format, and changing the way iNSpect renders the graph.

It is also possible to write the visualization tool from scratch. This option will be considered as well.

Toilers #2

The Toilers group is working on a system to sense environmental data, specifically the flow of contaminants in groundwater, and to use this information to create models which will predict future contaminant flow. Our group has been asked to integrate and update several existing systems and code modules, and to write new code, to collect and transmit data from underground sensors to a base station in an efficient manner.

First, the group has been asked to finish porting existing code from TinyOS version 1.x to version 2.0, as libraries have been written that will not work under older versions of the operating system. This has been mostly completed; all that remains is to import a multi-hop data collection system. We should also implement an event-driven data transmission system that sends only "interesting" data to the base station; motes should not waste energy sending data if the sensor readings are within certain predicted levels. Along with this event-driven system, we are asked to implement time synchronization between the motes, so that the time a reading was taken is sent to the base station along with the reading. Finally, we will create a new user interface that will display the readings from sensors and send commands to them.
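
The "interesting data" rule might reduce to a predicate like the one below; the predicted level and tolerance are made-up numbers, and the real implementation will be nesC code on the TinyOS 2.0 motes.

    # Transmit a reading only when it leaves the predicted band, so radio
    # energy isn't wasted on expected values. Numbers are illustrative.
    PREDICTED = 40.0    # predicted sensor level for this period
    TOLERANCE = 5.0     # half-width of the "uninteresting" band

    def should_transmit(reading: float) -> bool:
        return abs(reading - PREDICTED) > TOLERANCE

    for t, reading in [(0, 41.2), (1, 39.5), (2, 52.8)]:
        if should_transmit(reading):
            # In the real system: timestamp via time synchronization, then
            # send over the multi-hop collection tree to the base station.
            print(f"t={t}: send (reading={reading})")
        else:
            print(f"t={t}: suppress (within predicted band)")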

Toilers #3

A network simulator visualization tool named iNSpect is widely known, used by over 420 researchers in 51 different countries to visualize wireless and mobile ad hoc networks, and will soon include support for wired networks. iNSpect, currently on version 3.5, was originally developed by the Toilers Research Group at the Colorado School of Mines, which includes Tracy Camp, Stuart Kurkowski, Mike Colagrosso, Neil (Tuli) Mushell, Matthew Gimlin, Neal Erickson, and Jeff Boleng. In April 2005, iNSpect was upgraded to handle wireless networks by Fatih Gey and Peter Eginger from the Department of Security Technology at the Fraunhofer Institute for Computer Graphics Research, who provided a cleaner, more logical class structure to make the visualization tool easier to extend.

In November 2007, the Toilers group decided to update iNSpect to handle both wired and wireless visualizations while working with the new version, NS-3, in hopes that it would be included with future NS-3 distributions. To do this, the group has added simulation support for wired networks alongside wireless networks, redesigned the internal model to handle these simulations, and added functionality to turn nodes on and off during simulation.

To aid the Toilers group in updating iNSpect, the team's goal is to make iNSpect more maintainable and more functional, letting the Toilers group manage the program more efficiently. This involves completing the Model-View-Controller (MVC) architecture, removing the Singleton model, using the waf build system, adding mobility and NS-3 parsers, adding background images back into the program, making the file readers threaded, and continuing to add fail-fast functionality (preventing hidden assumptions).

The team will work mostly in the Alamode Lab at the Colorado School of Mines, using Eclipse to edit and learn the iNSpect source files, which are written in C/C++. The team will also use its knowledge of Python and UNIX shell scripting in a UNIX environment.