Computational Nanotechnology Group

Compiler optimizations in the scalar world have, many researchers believe, run their course. We're squeezing about as much performance as we can out of a single processor, at least through compiler techniques. The result is that the most active area of compiler research addresses parallel computer architectures, at both the fine-grained (instruction) level and the coarser-grained (distributed) level. One observation is that many existing compiler and optimization techniques can be adapted to benefit parallel architectures, sometimes with unexpected benefits. The focus of my research follows this general trend.

Project: Nanocompilers

Purpose: Development of a nanocompiler for a hypothetical nanocomputer, which involves proposing such a nanocomputer.

Researchers: Tom Way, Sateesh Venkata, Purushotham Ch

Research alumni: Bryan Wagner (nanogates), Danielle Manning (nanocomputer design), Jim Price (quantum computing), Tao Tao (nanogates in CPN), Rushikesh Katikar (Cell Matrix)

Applications: Future directions of computer architecture, compilers and how computers are used to solve problems


An exciting and highly interdisciplinary technology called Nanotechnology is just now reaching the earliest stages of practical application.  Its development is at a stage roughly analogous to that of computer technology during World War II.  In a mere 60 years, computers that once took up whole rooms now fit in a pocket.  And the pace of advancement has quickened as we have become more adept at developing it.  It is predicted that Nanotechnology, also known as Molecular Manufacturing, will be the next major breakthrough in manufacturing technology.  If forecasts are correct, it will be bigger than the Assembly Line, the Printing Press and the Internet put together!  And we will likely see it mature within our lifetimes!

Nanotechnology is about building things from the atom up.  Imagine taking the raw elemental materials (atoms of carbon, oxygen, hydrogen, etc.) and constructing molecules from these, and mass-producing all of the needed molecules in just the right proportions, positions and varieties to make a new toaster-oven or baseball or 100-foot TV screen or an unlimited supply of flu vaccine or protein bars or toilet paper.  Since all things are various combinations of atoms pieced together in just the right way, if we can master just how to do the piecing-together part, we can theoretically build mass quantities of just about anything very cheaply!

However, that's "crazy talk."  Much as Artificial Intelligence was over-promoted in its earliest days as the next "killer app" that would make our computers smarter than us, do all of our work, talk to us and understand, it is certain that Nanotechnology will not immediately provide a free and limitless supply of all variety of things.  But Nanotechnology will do a lot, helping to engineer new medicines, materials and computers that we have no other way to create today.

Leading the field of nanotechnology is K. Eric Drexler, founder of the Foresight Institute.  Dr. Drexler wrote the seminal document on the field, Engines of Creation, which jump-started a science that had been written about and forecast for years by such luminaries as Ralph Merkle, Marvin Minsky and Nobel laureates Richard Smalley and Richard Feynman, among others.  The work of these people inspires my research into a small part of an amazing new field of interdisciplinary science.

Our research in this area involves the development of a compiler for a hypothetical nanocomputer.  A nanocomputer is distinct from a typical computer because it is created at the time it is needed, based on what it is needed for.  The computer program determines what the nanocomputer looks like, so a "nanocompiler" will translate a human-written computer program into the instructions to first build the corresponding computer, and then run the program on this custom-built computer.  A nanocomputer can be massively parallel in design, having many thousands, millions, or even billions of processors to handle the computation.  It can also be reconfigurable, either at compilation time (as in traditional hardware synthesis) or at runtime (a newer and more speculative capability), similar to Field Programmable Gate Array (FPGA) computer architectures.

The nanocompiler we are developing is distinct from the "nanocompiler", or "assembler", the molecular-scale mechanical device that translates assembly instructions for a nanoscale object into the object itself, using raw atoms and molecules as building blocks.  Our use of the term "nanocompiler" is particular to the computer science field of compilers: the computer programs that translate a human-readable, or source code, program into the binary code that runs on a computer.  We would love to avoid future confusion by selecting a better term, so part of our research will be to find the best term to express the goals of our work.

Major issues in building a nanocompiler include identifying the requirements of the target computer (functional units, memory architecture, instruction set, etc.), and a major component of that involves recognizing, or even generating, parallelism within the program to make more efficient use of such a flexible technology.  As with much of compiler research, the precise details of the target computer are less important than its general characteristics... but even the general characteristics of a hypothetical nanocomputer, as well as specifics such as carbon nanotube logic gates, are so interesting and yet so poorly understood that this should be a very rewarding and enjoyable area in which to conduct research!

Research subprojects:

  • Compilation for Reconfigurable Nanocomputers - Assuming the potential for flexible reconfigurability of nanocomputers, including numbers of functional units, memory structure and hierarchies, registers, etc., this project explores the development of a compiler that analyzes source code to generate an optimal-as-possible architecture for that code.  Issues to explore include direct hardware implementation of code fragments versus more general-purpose processor design, flexible ISA (and therefore microarchitecture) design, functional unit replication, parallel pipelining, etc.  This project involves surveying the area of compiler analysis for machine-specific applications, and the implementation of a static (and possibly also a dynamic) analysis and machine-description-generation module for an existing research compiler framework, conducting experiments, and writing them up.
  • Nanocomputer architecture - What are likely organizations of a nanocomputer?  What kind of memory, processor(s), communication, parallelism, reconfigurability, low-level technology, chemical, biological and physics concepts, compiler techniques, instruction set, etc. will it use?  This project involves surveying the literature in the subject area, and crafting (on paper) a nanocomputer, explaining what it might look like and why.  It also involves construction of models using Colored Petri Nets, NanoEngineer, or other tools to further quantify and explore our ideas.
  • PRAM modeling - Develop a set of Colored Petri Net (CPN) models that can simulate the theoretical PRAM, which is useful for parallel algorithm study and research. If realistic constraints on the PRAM models can be included, it may be possible to use these models as the hypothetical nanocomputer architecture. The idea would be to analyze a program and emit the appropriate PRAM on which the program would be run.
  • Parallel architecture simulator - Find the best, most flexible, free & open source compiler & simulator framework, install it on our Linux research machine (Links: Trimaran (here), Proteus (here), Internet Parallel Computing Archive).  We already have Trimaran installed and running, and it might be sufficient, but we want to be sure.  The "best" system will allow a machine to be generated after analyzing the source code that will be run on the new machine, and then that machine description can immediately be used to simulate the new machine as it runs the compiled source code program.  I think Trimaran can do this.
  • Reconfigurability - What are the unique capabilities that only a nanocomputer, one that is potentially reconfigurable at run-time and compilation-time, might exhibit?  Does a nanocomputer hold this potential, and if so, what would it be like?  What issues does it raise that need to be solved?  This project involves surveying the subject, exploring how nanotechnology might enable more powerful reconfigurable computer architectures (a sort of FPGA on steroids), and proposing how such an architecture might work, pointing out its potential strengths, weaknesses and applicability.
  • Automatic Parallelization - Since a nanocomputer is likely to be flexibly configurable, and very likely to be parallel (and perhaps massively parallel), the classic parallelizing compiler research can be dusted off and brought up to date, and then applied to this new form of computer architecture.  This project involves surveying the subject area to see what has been done before, proposing an application to a hypothetical nanocomputer, and perhaps implementing an automatic parallelization component within an existing research compiler framework and conducting experiments.
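The PRAM modeling subproject above can be prototyped even before the Colored Petri Net models exist. The following is a minimal sketch, with all class and function names invented for illustration, of the lockstep read-then-write semantics of a synchronous PRAM, running a logarithmic-time parallel sum as a sample workload:

```python
# Minimal sketch of a synchronous PRAM simulator (illustrative only; the CPN
# models described above would capture far more detail, such as access conflicts).

class PRAM:
    def __init__(self, n_procs, shared):
        self.n = n_procs
        self.mem = list(shared)  # the shared memory cells

    def step(self, program):
        """One lockstep cycle: all processors read, then all writes are applied."""
        writes = []
        for pid in range(self.n):
            writes.extend(program(pid, self.mem))  # reads happen here
        for addr, val in writes:                   # writes applied afterward
            self.mem[addr] = val

def parallel_sum(data):
    """Tree reduction: about log2(n) PRAM steps for n = len(data), n a power of 2."""
    pram = PRAM(len(data) // 2, data)
    stride = 1
    while stride < len(data):
        def cycle(pid, mem, s=stride):
            i = pid * 2 * s
            if i + s < len(mem):
                return [(i, mem[i] + mem[i + s])]  # combine a pair of cells
            return []
        pram.step(cycle)
        stride *= 2
    return pram.mem[0]
```

For 8 inputs the reduction finishes in 3 lockstep cycles, which is the kind of step-count statistic the CPN models would be used to measure.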

Current activities:

  • Spring 2009 - Use NanoEngineer or Processing to construct a working model of a NAND gate (or other logic gate). First, model Drexler's NAND gate, then model other approaches. Write up for submission to a conference as "Design of a nanoscale NAND gate using NanoEngineer". Also, update, revise and submit "Compiling Mechanical Nanocomputer Components" from a previous journal submission, or make use of its results in a current paper.
  • Summer 2009 - Determine how to programmatically generate NanoEngineer or Processing models where NAND gates are connected into basic computational components, such as an adder. Explore the use of a compiler to generate these from a source code input program. Write up and submit.
  • Summer 2009 - Extend Rushikesh's Cell Matrix research to implement complete compilation of a small C program (e.g., matrix multiplication) to a Cell Matrix configuration. Then, get it to work on a few more small C programs. Gather compile-time and run-time statistics, write-up, submit to conference or journal (tbd). Investigate what it would take to extend this to Java input in the future, as well.
  • Fall 2009 - Investigate more deeply source code analysis that generates a profile of needed resources for a given program (memory: ports, size, cache architecture, local/shared/distributed; registers: number, ports, temporaries; ISA: common instruction sequences, meta-instruction creation, dynamic machine optimization). Implement using tools tbd, with either C or Java source input, and run as many "real" benchmarks as possible through the analysis. Write up and submit.
  • Spring 2010 - Design a programming or assembly language for nano-assembly (in this case, of nanocomputers), essentially an assembler ISA, using MolML or another molecular assembly language as a foundation, and determine what NanoEngineer uses. The goal would be to go from a high-level object description language to the low-level assembly instructions needed to molecularly assemble the object.

Mechanical Logic Gates:




Publication resources:

Reconfigurable architecture links & papers:

Nanotechnology papers and other links:

PRAM info

FPGA research and info

  • Altera - University program education kits (link), FPGA kits
  • Xilinx - FPGA kits
  • Impulse - C programming tools for FPGA
  • Mitrion - compiler that simulates a CPU on an FPGA so that a CPU-targeted application can run on that FPGA (article)

Compiler research:


  • Identify, specify, describe a reconfigurable nanocomputer
  • Identify compiler issues for nanocomputer
  • Design compiler module, theory, etc. - what are the phases of compilation?
  • Experimentation, compare with other HPC architectures (implementation on Trimaran platform)

Traditional compilation

  • Lexical analysis (scanning)
  • Parsing
  • Semantic analysis
  • Optimization
  • Code generation
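The phases above can be demonstrated end to end on a toy scale. This sketch, with an invented two-instruction stack ISA, scans, parses, and generates code for arithmetic expressions; it is purely illustrative, not part of the research compiler framework:

```python
# Toy walk through the traditional phases: scan -> parse -> code generation.
import re

def scan(src):
    """Lexical analysis: split the source into number and operator tokens."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    """Recursive-descent parsing: expr -> term ('+' term)*, term -> factor ('*' factor)*."""
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def eat():
        nonlocal pos
        tok = tokens[pos]; pos += 1; return tok
    def factor():
        if peek() == "(":
            eat(); node = expr(); eat()  # consume ')'
            return node
        return ("num", int(eat()))
    def term():
        node = factor()
        while peek() == "*":
            eat(); node = ("*", node, factor())
        return node
    def expr():
        node = term()
        while peek() == "+":
            eat(); node = ("+", node, term())
        return node
    return expr()

def codegen(node):
    """Code generation: emit postorder stack-machine instructions."""
    if node[0] == "num":
        return [("PUSH", node[1])]
    op = "ADD" if node[0] == "+" else "MUL"
    return codegen(node[1]) + codegen(node[2]) + [(op,)]

def run(prog):
    """Tiny stack-machine interpreter, standing in for the target computer."""
    stack = []
    for instr in prog:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack[0]
```

Semantic analysis and optimization are omitted here; in this toy language there are no types to check, and constant folding would be the natural first optimization to add.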

1st generation compilation using a Nanocompiler

  • Traditional compiler: Scan, parse, semantically analyze source program
  • Parallelizing compiler: Analyze code, identify opportunities for parallelization & generate optimal machine description
  • Nanoassembler: Translates machine description into custom nanocomputer that implements source program, or on which the source program can be run
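The "generate optimal machine description" step in the middle stage can be sketched as follows, under the simplifying assumption that the parallelizing compiler has already built a dependence DAG for a straight-line program; the description format and all names are invented for illustration:

```python
# Hedged sketch: size a custom machine from a dependence DAG by finding the
# widest level of mutually independent operations. Illustrative only.

def machine_description(ops, deps):
    """ops: list of unique op names; deps: dict mapping an op to the ops it depends on."""
    level = {}
    def depth(op):
        # An op's level is one more than the deepest op it depends on.
        if op not in level:
            level[op] = 1 + max((depth(d) for d in deps.get(op, ())), default=0)
        return level[op]
    for op in ops:
        depth(op)
    # Peak number of independent ops at any level = functional units needed
    # to exploit all available parallelism in one pass over the program.
    width = max(sum(1 for o in ops if level[o] == l)
                for l in set(level.values()))
    return {"functional_units": width,
            "pipeline_depth": max(level.values()),
            "ops": sorted(set(ops))}
```

For example, two independent loads feeding an add that feeds a store would yield a description with two functional units and a depth of three; the nanoassembler stage would then consume such a description to build the custom machine.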

2nd generation compilation using a Nanocompiler

  • Perform 1st generation compilation
  • Run the program
  • During runtime, continuously analyze computation and adapt machine dynamically to improve performance
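The runtime-adaptation step might, in its simplest form, look like the following policy sketch; the utilization thresholds and the doubling/halving strategy are assumptions made for illustration, not a proposed mechanism:

```python
# Illustrative runtime-adaptation policy: between monitoring intervals,
# grow or shrink the functional-unit count based on observed utilization.

def adapt(units, utilization, low=0.3, high=0.9):
    """Return the functional-unit count to reconfigure to for the next interval."""
    if utilization > high:
        return units * 2           # saturated: replicate functional units
    if utilization < low and units > 1:
        return max(1, units // 2)  # mostly idle: reclaim the hardware
    return units                   # in the comfortable band: leave it alone
```

A real 2nd-generation nanocompiler would of course have to weigh the cost of physical reconfiguration against the expected gain, which this sketch ignores.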

updated: 02/10/09