Marie Curie EU funded project on Asynchronous Networks



Visualization

Visualization can be an important tool for understanding the dynamics of complex systems. We show some examples ranging from random asynchronous networks to the evolution of chimera states and "sloppy" computation. All of these images were created using software written by the Fellow.

  • FractalCircuit, a place dependent iterated function system (IFS) with random jumps (click on image to enlarge).

    FractalCircuit

  • Patterns of synchronization for a place dependent random asynchronous network with 7 nodes.

    PDasync

  • Patterns of synchronization for a place dependent random asynchronous network with 6 nodes. Created for Workshop on Coupled Cell Systems, University of Porto, 2014.

    PYasync

  • Patterns of synchronization for a place dependent random asynchronous network with 7 nodes. Used as the main image for the poster advertising a public lecture at Dynamics Days 2016, Exeter UK.

    PZsync

  • Cycling Chaos movie (animated GIF of an asynchronous network with a random connection structure, showing cycling between attractors when symmetry is broken; large file, be patient).
  • Chimera movie (animated GIF format). The movie shows the evolution of a chimera state for a 5000 node phase oscillator system. The computation used fourth-order Runge-Kutta with a time step of 0.005. A frame of the chimera state was taken every 0.125 seconds. (Best viewed using a viewer with *no* post-processing or anti-aliasing features.)
  • In the next image we show the evolution of weights in a network of 5000 coupled odd logistic maps with 'bang-bang' type adaptation. A total of 15,230 iterations are plotted.

    PDasync

  • A digital computer computes synchronously: computations are timed to a clock. If the computation is threaded (or parallelized), then pieces of the computation are done synchronously but the synchronous blocks are handled asynchronously. A problem with threaded programming is getting global synchronization of the threads correct. For example, if thread A is using computations of thread B, we need to be sure that thread B has finished its computations before thread A uses them. This is harder than it looks, and the resulting code can be difficult to debug. After making some mistakes, we deliberately decided to synchronize threads only every hundred or so iterations (normally there would be 4 or 5 synchronizations per iteration). We call this a SLOGALS architecture (SLOppy GALS, where GALS stands for Globally Asynchronous, Locally Synchronous) and show the results of one such computation with 10,000 cells and 10 threads, synchronizing every 200 iterations. The picture is close to the true picture computed without errors: provided the dynamics is of the right sort, and we are interested in qualitative results, we can sometimes be sloppy about the asynchronous part of the program.

    PDasync
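The chimera computation described above can be sketched along the following lines. This is a minimal illustration, not the Fellow's actual code: the cosine coupling kernel, the phase lag alpha, and the 256-node size (the original used 5000 nodes) are assumptions chosen to keep the example small, but the integrator is fourth-order Runge-Kutta with the stated time step of 0.005, and a frame is stored every 0.125 time units.

```python
import numpy as np

def chimera_rhs(theta, G, alpha):
    # Nonlocally coupled phase oscillators:
    # dtheta_i/dt = -(1/N) * sum_j G_ij * sin(theta_i - theta_j + alpha)
    diff = theta[:, None] - theta[None, :] + alpha
    return -(G * np.sin(diff)).mean(axis=1)

def rk4_step(theta, dt, G, alpha):
    # Classical fourth-order Runge-Kutta step, as in the movie (dt = 0.005)
    k1 = chimera_rhs(theta, G, alpha)
    k2 = chimera_rhs(theta + 0.5 * dt * k1, G, alpha)
    k3 = chimera_rhs(theta + 0.5 * dt * k2, G, alpha)
    k4 = chimera_rhs(theta + dt * k3, G, alpha)
    return theta + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

N = 256                                   # illustrative; the original used 5000
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, N)

# Cosine coupling kernel on a ring, a common setting where chimeras occur
# (an assumption here, not necessarily the model used for the movie)
x = 2 * np.pi * np.arange(N) / N
G = 1.0 + 0.995 * np.cos(x[:, None] - x[None, :])
alpha = 1.457                             # phase lag near pi/2

dt = 0.005
frames = []
for step in range(1, 2501):
    theta = rk4_step(theta, dt, G, alpha)
    if step % 25 == 0:                    # 25 * 0.005 = 0.125 time units/frame
        frames.append(np.mod(theta, 2 * np.pi).copy())
```

Plotting each stored frame (oscillator index against phase) and assembling the plots into an animated GIF reproduces the kind of movie linked above.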
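The weight-evolution image can be illustrated with a sketch of 'bang-bang' adaptation: each weight moves by a fixed increment up or down depending on whether the two cells it connects are currently close, so its rate of change only ever takes two values. The particular odd map f(x) = a x (1 - x^2), the diffusive coupling, and all parameter values below are assumptions for illustration; the original computation used 5000 maps.

```python
import numpy as np

def f(x, a=2.3):
    # One common "odd" variant of the logistic map: f(-x) = -f(x) on [-1, 1].
    # An assumption here; the original map may differ.
    return a * x * (1.0 - x * x)

rng = np.random.default_rng(1)
N = 50                                  # illustrative; the original used 5000
x = rng.uniform(-1.0, 1.0, N)
w = np.full((N, N), 0.5)                # coupling weights, kept in [0, 1]
np.fill_diagonal(w, 0.0)
eps, delta, tol = 0.1, 0.01, 0.05

for _ in range(500):
    fx = f(x)
    # Diffusively coupled map iteration through the current weights
    x = fx + (eps / N) * (w * (fx[None, :] - fx[:, None])).sum(axis=1)
    # Bang-bang adaptation: every weight changes by exactly +delta or -delta,
    # reinforcing connections between cells whose states are close
    close = np.abs(x[:, None] - x[None, :]) < tol
    w += np.where(close, delta, -delta)
    np.clip(w, 0.0, 1.0, out=w)
    np.fill_diagonal(w, 0.0)
```

Plotting the weight matrix (or a summary of it) at each iterate, over many iterations, gives a picture of the kind shown above.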
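The SLOGALS idea of the last item can be sketched as follows: each thread repeatedly updates its own slice of a shared state, reading possibly stale values from the other slices, and the threads only meet at a global barrier every `sync_every` iterations. The local averaging update, the thread counts, and the function names are illustrative assumptions, not the actual 10,000-cell computation.

```python
import threading
import numpy as np

def slogals_run(state, n_threads=4, iters=1000, sync_every=200):
    # Each thread owns a slice of `state` and updates it by local averaging.
    # Reads of other threads' slices may be stale ("sloppy"): global
    # synchronization happens only every `sync_every` iterations.
    barrier = threading.Barrier(n_threads)
    chunks = np.array_split(np.arange(len(state)), n_threads)

    def worker(idx):
        n = len(state)
        for it in range(iters):
            # Snapshot may mix values from different iterations of other threads
            snapshot = state.copy()
            for i in chunks[idx]:
                state[i] = (0.5 * snapshot[i]
                            + 0.25 * (snapshot[(i - 1) % n]
                                      + snapshot[(i + 1) % n]))
            if (it + 1) % sync_every == 0:
                barrier.wait()          # occasional global synchronization

    threads = [threading.Thread(target=worker, args=(k,))
               for k in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state

result = slogals_run(np.random.default_rng(2).uniform(size=64))
```

Because each update is a convex combination of earlier values, the result stays bounded even when reads are stale, which is the sense in which dynamics "of the right sort" tolerates sloppy synchronization.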