The Top 20 Applications for an Infinitely Fast Computer (article first created on 5th July 2009).



6: Models for a universal theory of nature



Courtesy of NASA. Unraveling the secrets of dark matter/energy may be the missing key we need to form a grand unified theory of everything. Alternatively, they may help us to achieve tastier ice cream.
Although one of the problems in finding a "Grand Theory of Everything" is the lack of data (e.g. we don't know certain things that happen at subatomic scales), the other large obstacle is finding the perfect set of equations to match all the data we do have. Distributed projects such as Cosmology@Home may find some answers, but a Brute Force search on an infinitely fast computer bypasses this second obstacle by enumerating all possible theories and filtering them down to the most elegant (shortest?) ones. It's not guaranteed we'd find the universal theory, since a never-ending series of approximations may be required (each converging to the 'truth' but never quite reaching it). But if nothing else, we would get to see how many competing theories could exist!
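
As a crude sketch of what such a search could look like: the "theories" below are just tiny arithmetic formulas in x, and the "observations" are invented for the example. Real physics would need a vastly richer language, but the shape of the search - shortest candidates first, keep whatever fits - is the point.

    from itertools import product

    data = [(0, 0), (1, 1), (2, 4), (3, 9)]    # made-up "observations"
    tokens = ["x", "1", "2", "+", "*", "-"]

    def fits(expr):
        try:
            return all(eval(expr, {"x": x}) == y for x, y in data)
        except Exception:
            return False                       # ill-formed candidate theory

    # Enumerate the shortest theories first: Occam's razor as a search order.
    for length in range(1, 6):
        for combo in product(tokens, repeat=length):
            expr = " ".join(combo)
            if fits(expr):
                print("candidate theory: y =", expr)

Run it and several rival theories turn up ("x * x", but also padded variants like "x * x * 1") - a miniature version of the competing-theories point above.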

Finding something resembling a Theory Of Everything would probably allow many advances in engineering, just as the discovery of relativity led to better materials, fission and GPS, and the discovery of quantum mechanics led to the laser and the microchip. We could then see the limits to space travel, and know for sure whether faster-than-light travel (through wormholes etc.) is attainable. The big question of exactly how the universe began (and even what came before that) could be answered once and for all.



5: Graphics (end user)


Created by Gilles Tran. It'll be a while before we start seeing this kind of quality in video games. For the classic example of global illumination in action, see the stunning video by Henrik Wann Jensen.

For video games, much discussion has taken place over the potential of raytracing as a replacement for traditional rasterization. The former can of course look better and eventually ease graphics development, but rasterization is a mature field (including all of its tricks and hacks), and it does allow faster speeds if the polygon count is relatively low. Processors will need to progress further before raytracing becomes viable.

But the type of raytracing that's being discussed is a far cry from what will eventually happen. A concept so grand that it will change the face of 3D gaming forever...

Global Illumination.

In comparison to games with global illumination, 3D worlds with only direct illumination look crude and unconvincing. Objects appear 'cookie-cutter'-like and don't gel with the overall 3D landscape. Thanks to the kludge of ambient lighting, shadows often look flat, since they receive no indirect lighting from other objects. Colours don't bleed between objects, again making them look out of place in the world they attempt to reside in.

In a globally illuminated world, every object is its own 'light source'. There's no need to use fake ambient lighting, or to needlessly add multiple light sources to compensate. Features such as soft shadows, caustics, and specular reflection come as standard with GI, helping to provide spectacular water, glass, and translucency/crystal effects. But perhaps it's the overall indirect lighting between objects which will most transform the look of games.
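
To make the idea concrete, here is a minimal sketch of the Monte Carlo path tracing at the heart of global illumination. Everything about the scene (a grey floor under a glowing lamp, diffuse surfaces only, a black sky) is invented for the example; a real renderer would add a camera, colour, and far smarter sampling.

    import math, random

    # Scene: spheres as (center, radius, albedo, emission).
    SPHERES = [
        ((0.0, -1000.0, 0.0), 1000.0, 0.7, 0.0),   # giant sphere acting as a grey floor
        ((0.0,     2.0, 0.0),    0.5, 0.0, 12.0),  # small bright "lamp" overhead
    ]

    def hit(origin, direction):
        """Nearest sphere intersection along the ray, or None."""
        best = None
        for s in SPHERES:
            c, r = s[0], s[1]
            oc = [origin[k] - c[k] for k in range(3)]
            b = sum(oc[k] * direction[k] for k in range(3))
            disc = b * b - (sum(x * x for x in oc) - r * r)
            if disc > 0:
                t = -b - math.sqrt(disc)
                if t > 1e-4 and (best is None or t < best[0]):
                    best = (t, s)
        return best

    def random_direction():
        """Uniform random direction on the unit sphere (rejection sampling)."""
        while True:
            v = [random.uniform(-1, 1) for _ in range(3)]
            n = math.sqrt(sum(x * x for x in v))
            if 0 < n <= 1:
                return [x / n for x in v]

    def radiance(origin, direction, depth=0):
        """Light emitted at the hit point plus one random indirect bounce."""
        found = hit(origin, direction)
        if found is None or depth > 4:             # black sky; cap the recursion
            return 0.0
        t, (c, r, albedo, emission) = found
        p = [origin[k] + t * direction[k] for k in range(3)]
        n = [(p[k] - c[k]) / r for k in range(3)]  # surface normal
        d = random_direction()
        cos_t = sum(n[k] * d[k] for k in range(3))
        if cos_t < 0:                              # keep the bounce above the surface
            d, cos_t = [-x for x in d], -cos_t
        # Diffuse bounce, weighted for uniform hemisphere sampling (pdf = 1/2pi).
        return emission + 2.0 * albedo * cos_t * radiance(p, d, depth + 1)

    # Average many random paths arriving at a point just above the floor.
    random.seed(1)
    est = sum(radiance((0.0, 1.0, 0.0), random_direction()) for _ in range(20000)) / 20000
    print("estimated brightness at that point:", est)

The key line is the recursive bounce: light at a point is whatever the surface emits plus whatever arrives indirectly from a random direction. Averaged over thousands of paths, that single rule produces colour bleeding, soft shadows and all the rest.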

Developers can also go overboard with sub-pixel sampling, sub-surface scattering (for more realistic, glowy materials), properly curved surfaces (as well as polygons), atmospheric effects such as volumetric lighting, and utilize trillions or more polygons/B-Splines in realtime. All video can be made super-smooth too (500 frames per second - approaching the limit of perception).

In reality, the state of the art is not quite there. Toy Story 2 took between 2 and 20 hours to render each frame, or around five hours on average. To render in real-time for a video game (say 60 FPS), you would need a processor just over 1,000,000 times faster than what we have today. And that's mostly using Reyes rendering (which relies mostly on rasterization techniques, with only minimal ray tracing).
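
That figure is easy to sanity-check (taking the five-hour average at face value):

    hours_per_frame = 5
    seconds_per_frame_now = hours_per_frame * 3600    # 18,000 seconds per frame
    seconds_per_frame_target = 1 / 60                 # 60 frames per second
    print(seconds_per_frame_now / seconds_per_frame_target)   # 1,080,000 - just over a million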

The good news is that we probably won't have to wait forever for some of this. Raytracing and even global illumination are slow, but they're not exponentially slow. Technology is slowly beginning to produce ray-traced graphics in realtime. Next stop, path-tracing please.




4: Rapid software development


Created by Patrick J. Lynch (CC-BY 2.5).
Less brain power will be required when coding in ultra-high-level languages, and hence we can get away with writing sloppier (but more readable) code. As a side effect, we can emulate the brain itself, and get it to do the sloppy, tedious code-writing for us.


In programming, there's often a balance to strike between readability/maintainability/modularity and the speed of code. That would all change with an infinitely fast processor. All code would be written with the former in mind, with little or no regard for efficiency. There would be no need, and so software would become much quicker and cheaper to develop. Lower-level languages such as assembler, C or even Java can go for walkies. Instead, something more BASIC-, Ruby- or Python-like would be the future. Or maybe something more declarative such as Prolog could be used, where one defines the outcome rather than the steps needed to achieve it.
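
Here's a toy sketch of that "define the outcome, not the steps" style, using Python as a stand-in for something Prolog-like: subset-sum stated as a description of the answer, with blind enumeration doing all the actual work. Exponential today; a non-issue on an infinitely fast machine.

    from itertools import chain, combinations

    def subset_summing_to(xs, target):
        """Return a subset of xs whose elements sum to target - stated
        declaratively; brute force supplies the 'how'."""
        candidates = chain.from_iterable(
            combinations(xs, n) for n in range(len(xs) + 1))
        return next(s for s in candidates if sum(s) == target)

    print(subset_summing_to([3, 9, 8, 4, 5, 7], 15))   # first hit: (8, 7)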

One example of simpler algorithm development would be sorting. Suddenly, Bubble Sort would start to make sense. Actually, scrap that: Bozo sort, one of the archetypes of bad sorting, could now be the one to go for.
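
For the curious, a minimal Python take on Bozo sort, under its common definition (swap two random elements, check, repeat - definitions vary between sources):

    import random

    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def bozo_sort(xs):
        """Swap two random elements until the list happens to be sorted.
        The expected running time is dreadful - which is exactly the point."""
        xs = list(xs)
        while not is_sorted(xs):
            i, j = random.randrange(len(xs)), random.randrange(len(xs))
            xs[i], xs[j] = xs[j], xs[i]
        return xs

    print(bozo_sort([5, 3, 1, 4, 2]))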

Also, we can stop wasting our time improving the efficiency of previously slow algorithms. For example, the Barnes-Hut simulation algorithm can be thrown out in favour of brute-force N-body simulation. Problems which previously demanded analytical solutions can now be evaluated numerically through sheer Brute Force, and we can finally lift the curse of dimensionality.
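
A sketch of what that brute-force N-body approach looks like - every body attracts every other body directly, O(N^2) per step. The units, constants and softening fudge are all arbitrary; this is an illustration, not a serious integrator.

    import random

    G, DT, N = 1.0, 0.01, 100
    pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]
    vel = [[0.0, 0.0, 0.0] for _ in range(N)]
    mass = [1.0] * N

    def step():
        for i in range(N):
            ax = ay = az = 0.0
            for j in range(N):                    # every body pulls on every other
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                dz = pos[j][2] - pos[i][2]
                r2 = dx * dx + dy * dy + dz * dz + 1e-6   # softening avoids blow-ups
                w = G * mass[j] / r2 ** 1.5
                ax += dx * w; ay += dy * w; az += dz * w
            vel[i][0] += ax * DT; vel[i][1] += ay * DT; vel[i][2] += az * DT
        for p, v in zip(pos, vel):
            for k in range(3):
                p[k] += v[k] * DT

    for _ in range(10):
        step()
    print("first body is now at", pos[0])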

As a bonus, we can skip the fierce scientific debate about whether developing Metaheuristics is a waste of time *. ;)

* <Begin Controversial Statement> (Alternatively, one could compare random or Brute Force search with the success of, say... genetic algorithms, and settle the debate that way - free lunches are best eaten hot) <End of Controversial Statement>.

In terms of programming animation/video, we can forget pixels and frames per second completely, and instead think in terms of time and screen proportions.
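
A sketch of that pixel-free mindset: an invented bouncing-ball animation defined purely as a function of time in seconds and of screen fractions (0..1), mapped onto actual pixels only at the last moment.

    import math

    def ball_position(t):
        """Bouncing ball at time t (seconds): (x, y) as fractions of the screen."""
        return 0.1 + 0.8 * (t / 4 % 1.0), 0.9 - 0.8 * abs(math.sin(t * 3))

    def to_pixels(frac, width, height):
        return int(frac[0] * width), int(frac[1] * height)

    # The same animation, sampled for two very different displays:
    print(to_pixels(ball_position(1.25), 640, 480))
    print(to_pixels(ball_position(1.25), 7680, 4320))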



3: Physics and particles (for entertainment purposes)



From the World Of Goo game.
Special effects in films would become ever more spectacular. But let's concentrate on games for this section. Physics-based gameplay won't be restricted to pinball or sports. In fact, we can go beyond static polygons, and build up our world from trillions of individual atoms to allow for realistic simulation of effects such as water, explosions and air flow. Indeed, games such as World of Goo, Little Big Planet, Hydrophobia or Crysis with its 3000-barrel explosion reflect some of the changes taking place in this scene. However, you can bet that last one is not running in realtime (as of the time of writing, about a 250x speed-up would be needed for that!).

With this sort of game engine, expect hyper-realistic and interactive world effects such as liquids, bridges, explosions, weather and breakable surroundings, but also many strange and novel visual scenes and gameplay styles: the manipulation of semi-liquid, jelly-like objects, monopole magnets, unusual explosion effects, reverse black holes, matter conversion, and other such madness. Games can feature bizarre stories, such as a battle between Blue Goo and Grey Goo to stop the latter from eating everything in sight, and be realistic if need be. We can redefine the laws of physics itself to our whim.
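
As a taste of the particle-based softness behind 'goo'-style games, here is a tiny sketch: a ring of point masses joined by springs, integrated with Verlet and dropped onto a floor. All the constants are invented for the example.

    import math

    N, REST, STIFF, DT, GRAVITY = 12, 0.5, 0.5, 0.02, -9.8
    # A ring of point masses, starting as a circle hovering above the floor.
    pts  = [[math.cos(2 * math.pi * i / N), 2 + math.sin(2 * math.pi * i / N)] for i in range(N)]
    prev = [p[:] for p in pts]                     # previous positions (Verlet state)

    def step():
        for i, p in enumerate(pts):                # inertia + gravity
            vx, vy = p[0] - prev[i][0], p[1] - prev[i][1]
            prev[i] = p[:]
            p[0] += vx
            p[1] += vy + GRAVITY * DT * DT
        for i in range(N):                         # springs between neighbours
            a, b = pts[i], pts[(i + 1) % N]
            dx, dy = b[0] - a[0], b[1] - a[1]
            dist = math.hypot(dx, dy) or 1e-9
            push = STIFF * (dist - REST) / dist / 2
            a[0] += dx * push; a[1] += dy * push
            b[0] -= dx * push; b[1] -= dy * push
        for p in pts:                              # crude floor collision at y = 0
            p[1] = max(p[1], 0.0)

    for _ in range(200):                           # simulate 4 seconds
        step()
    print([(round(x, 2), round(y, 2)) for x, y in pts])

Verlet integration stores positions rather than velocities, which keeps springy systems like this stable - one reason it's a staple of game physics.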

Finally, using atoms and molecules as a basis for virtual reality and games allows the calculation of realistic sounds (instead of prerecorded ones). Only recently has there been an attempt to model realistic sounds such as a dripping water tap, and even that model isn't perfect, due to the complexity of the problem. You can imagine how complicated the acoustics of an ocean may be in comparison...
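
In the same spirit, here's a minimal sketch of modal synthesis - building a 'drip'-like sound from a handful of decaying sine partials. The frequencies, decay rates and amplitudes below are guesses for illustration, not measurements of real water.

    import math, struct, wave

    RATE, DUR = 44100, 0.4
    # Each mode: (frequency Hz, decay rate, relative amplitude) - all invented.
    modes = [(620, 18.0, 1.0), (1240, 30.0, 0.4), (2100, 45.0, 0.2)]

    frames = bytearray()
    for n in range(int(RATE * DUR)):
        t = n / RATE
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        s = max(-1.0, min(1.0, 0.5 * s))           # clamp to the sample range
        frames += struct.pack('<h', int(s * 32767))

    with wave.open('drip.wav', 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

A real physical model would derive those modes from the geometry and material of the vibrating body; here they are simply hard-coded.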





2: Artificial Intelligence


[source: Matthias Süßen (CC-BY-SA 2.5)]
There's a nice scenic picture, with a graceful mist in the distance.

Except it's not fog at all, but rather airborne Grey Goo, consuming everything in its path! If AI ever reaches its most advanced stage, there's a tiny chance the Grey Goo will eat the world. But that's not really going to happen, is it?


We're in more speculative territory now, but according to Ray Kurzweil, computers should start to match the speed of the human brain by around 2030 (around 10,000 trillion calculations per second). At that point, we may be able to let humanoids do our housework, and at some point after that, even attain the singularity itself.

It's possible all this may happen, of course, but unless we can create true artificial intelligence (whether that is even possible is an open philosophical question), what we would really need is an incredibly precise and all-encompassing 'fitness function'. Without knowing the exact attributes of what is wanted, our infinitely fast computer may otherwise wander aimlessly in an infinite search space.

In addition, the computer's potential inability to understand aesthetics, or even what makes a good piece of music, may prevent us from using this bombshell to automatically and easily create a future paradise, never mind cure the human condition of unhappiness generally.

But if we can create or evolve some kind of true AI which has a sense of desire and what's 'good', or at least one which has access to almost every tiny bit of knowledge that exists, then it's possible that a utopia or technological singularity could emerge. Whether it would instead open Pandora's box is anyone's guess.

For the time being, we'll have to make do with simulating a rat's neocortical column.

Because of its speculative nature, AI just misses out on the top spot, which goes to...


1: Physics and particles (for scientific/engineering purposes)



[Source: Universesandbox.com] What happens when two galaxies collide?
Being able to simulate the universe is an ambitious task even for supercomputers. We need to simplify galaxies, and particularly the structure of matter, to get even close.
And here it is, the big numero uno application for infinite speed. At the moment, we use lots of shortcuts to model the universe around us. How would that change if speed was no obstacle? Well, for starters, we could forget the fast but rough approximations of continuum mechanics (fluid/solid mechanics). Yes, even forgo the lesser generalizations of statistical mechanics, and instead go straight for a purely numerical/computational solution: Brute Force molecular dynamics would let us simulate all particles interacting with each other. But hang on - since we've got CPU cycles coming out of our ears, why stop there? Solving problems involving the motion of fluids with quantum dynamics might seem a bit like overkill, but if we're in this for the long haul...
    [source]:
    "Quantum theory in principle allows us to predict the structure and reactivity of all molecules, but the equations of Quantum Theory become intractably complex with increasing system size. Exact analytical solutions are only possible for the smallest systems and for almost all molecules of interest in chemistry and life sciences no such solutions are known to us."
One useful application would be in the field of aerospace. This excerpt is taken from Frontiers of Supercomputing II, Chapter 9 - INDUSTRIAL SUPERCOMPUTING (Kenneth W. Neves):
    Why Use Supercomputing at All? (p335)
    [...]. One creates a geometric description of a wing, for example, and then analyzes the flow over the wing. We know that today supercomputers cannot handle this problem in its full complexity of geometry and physics. We use simplifications in the model and solve approximations as best we can. [...]. Smaller problems can be run on workstations, but "new insights" can only be achieved with increased computing power.
Until then, we'll need shortcuts such as kludgy heuristics, or analog machines to measure the Casimir force - a problem still too complicated for digital computers.

One of the most powerful ideas would be to use genetic programming (or rather, a simpler Brute Force search) to find solutions to general-purpose problems. Assuming a complete understanding of the 'theory of everything', the only remaining challenge that computers can't really handle for us is defining the scoring mechanism (or fitness function, as it's known in the AI world).
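
A sketch of the shape of such a search: score every candidate design with a fitness function and keep the best. The two-parameter "wing" and its lift/drag-like scoring below are entirely fabricated; a real version would run a full physics simulation inside fitness().

    from itertools import product

    def fitness(span, angle):
        """Hypothetical score: a lift-like reward minus a drag-like penalty."""
        lift = span * angle
        drag = 0.02 * span ** 2 + 0.5 * angle ** 2
        return lift - drag

    # A coarse grid for today's hardware; infinite speed would sweep
    # billions of parameters at arbitrarily fine resolution.
    candidates = product(range(1, 51), range(0, 21))
    best = max(candidates, key=lambda c: fitness(*c))
    print("best design found (span, angle):", best)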

Nanotechnology would get a boost too, as masses of computing power will be needed to design and construct nanotech equivalents of normal-size mechanisms such as bolts, screws, valves, wheels, hinges and more complex machinery.

Taken from: Frontiers of Supercomputing II - Chapter 8 - THE FUTURE COMPUTING ENVIRONMENT - Molecular Nanotechnology (Ralph Merkle)
    In the same way, we can model all the components of an assembler using everything from computational-chemistry software to mechanical-engineering software to system-level simulators. This will take an immense amount of computer power, but it will shave many years off the development schedule.






Offsite Links:

A nice thread asking the very same question
The Infinity Machine - A more technical slant, discussing the mechanics of how a theoretically infinitely fast computer may work. Also see this interesting follow up.
http://arxiv.org/pdf/math/0212047 - A research paper, expanding the concept of the Turing Machine to include infinitely fast processing.

Keywords for further research

  • currently intractable
  • quantum computation applications
  • Grand Challenge
  • brute force
  • NP, NP-Hard, NP-Complete, EXPTIME, 2-EXPTIME
  • curse of dimensionality
  • combinatorial explosion
  • Solomonoff Induction




  • Most pictures on this page are copyright of their respective owners; where no
    attribution is made, they are copyright 2008 onwards Daniel White.