Exercise 8: Final Project
Self-Organized Criticality





The last date for finishing all exercises, and thereby passing the course, is:

Friday 6-11-2009
5 pm.



Introduction:

This is the last assignment of the course and provides your final chance to show us that you have learned enough to write smaller FORTRAN programs. To pass this exercise you need to write -- from scratch -- a program that evolves a 2D system in a self-organized critical (SOC) state. Since its development in the late 1980s, SOC has been used to address different dynamical systems that are all characterized by their unpredictability. Examples of such systems include the size and time distribution of earthquakes, large energy release events in the solar atmosphere known as flares, the size distribution of avalanches of sand grains on a sandpile, and even stock prices. Like the Game of Life, SOC experiments are defined through a set of simple rules that alternate between stressing the dynamical system and releasing the built-up energy in locations where a predefined critical threshold is exceeded.
Having done the programming and tested the code, you have to write a short report describing your code, how to compile it, how to run it, how to visualize the results, and showing some examples of the results. All of this will become clearer when reading the text below.

There is one condition that has to be made clear right from the start: this is a test of your ability to write Fortran code.

Therefore, the work has to be done individually, and not in groups!


It is not a problem to discuss various issues with us or your usual mates, but you have to write the code and solve the problems yourself.



Getting the required files:

Download the tar file to your ~/Dat_F directory and unpack it:

$ cd ~/Dat_F
$ tar -zxvf fortran8.tgz
$ rm fortran8.tgz
$ chmod 700 Exercise_8

This creates a new directory "Exercise_8" that contains the provided files for this exercise and allows only you to work with the content in it.



Self-organized Criticality

A simple setup to visualize is a disk onto which sand grains are dropped close to the disk center. As time goes by, a pile builds up, and eventually it forms a cone with a nearly constant slope. As new grains are added, some will tumble down the sides before eventually finding a place to rest. Others will be more adventurous and initiate an avalanche of grains that slides down the side of the cone and vanishes over the edge of the disk. In the laboratory one can construct this using a setup a little more complicated than shown in this cartoon:

[Figure: cartoon of sand grains dropped onto a disk, forming a cone-shaped pile]

Using a digital scale one can monitor the changes in weight with time and associate the negative changes with measurements of the avalanche sizes. Making this type of measurement provides, in all SOC cases, a distribution function of the released "energy" that follows a power law with a given slope. Depending on the slope of the distribution function, either the large or the small events contribute most to the "energy" loss from the system. Depending on the system, different quantities are measured, and not all of them are in joules: in the sandpile experiment it is the mass loss that follows a power law, while for flares it is the released magnetic energy, and for earthquake models the energy released in each event.
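Stated as a formula -- with N(E) dE the number of events releasing an "energy" between E and E + dE, and alpha the slope in question -- the measured distributions take the form

  N(E) dE ∝ E^(-alpha) dE

where the value of alpha depends on the system.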

A problem with realistic models is that it is not easy to conduct experiments that work on realistic length scales and parameters. Instead, model experiments are created that try to reproduce the behavior seen in reality. For earthquakes the simplest model is to take a brick, place it on a table and pull it across, which can be done by attaching a string with a force-measuring unit to it and using a motor to apply a force that just about balances the friction. If the force is just above the threshold for beating the friction, the brick will move forward in "jumps", and the force-measuring unit will change value depending on how far forward the brick moves each time. The change in force is then a measure of the "energy" release and results in a distribution function of force changes.
The expectation is that results obtained from this type of experiment can be extended to the large-scale phenomena, which implies the assumption that a SOC system does not contain a characteristic length scale.

A different approach to the laboratory experiments is to generate a computer model that analyses the same kind of system, allowing one to freely change the rules and control parameters to see how they influence the final result.

The best known SOC system is the sandpile. As it is also one of the simplest, we are going to make a model to simulate it on a computer. To do this we need a set of rules that can easily be applied, and then ask the computer to repeat the basic event many times to improve the statistics of the distribution function.

In a scale-invariant system one assumes the power-law distribution function to have no beginning (minimum event size) and no end (maximum event size). Neither of these assumptions is naturally true in reality. The lower limit is determined by the properties of the material involved in the process (atom size...), and the upper end relates to the maximum size of the unstable domain. Between these limits one expects the power-law distribution to take up a significant fraction. That SOC systems have an inertial range (power-law distribution) implies that their characteristics can be obtained using relatively small domain sizes -- in this case small numerical experiments -- and convergence of the result can be tested by simply doubling the numerical resolution of the experiment.



SOC sandpile model:
You are to reproduce a simple 2D sandpile experiment. To represent the disk onto which the grains are dropped, you will use a 2D array of dimension (N,N), where N is a number that varies between different experiments. This array contains the number of grains piled in a column above each point on the plate. You should use one of two possible definitions of the disk. Adding grains at a single point, as is the default setup, creates a very systematic pattern. Instead we add a little noise to this pattern by using Gaussian-distributed random positions centered on the disk (array), with a half width such that only a few grains (less than 1%) are dumped more than the given radius from the center of the disk. Grains are dumped one by one; the criteria for instabilities are investigated after each dump, and eventual avalanches are allowed to evolve such that the pile of grains is always in a stable state before a new perturbation is added to the system. For each avalanche, the total mass loss across the chosen plate boundary is measured, and its size is stored for later analysis together with the number of iterations it takes to end the avalanche event and the total number of cells involved in the avalanche.
This process assumes that the timescale for adding grains is much larger than that for the avalanche to take place. This process, adding a grain followed by a relaxation of the pile, is repeated many times, such that a sufficiently large number of events are registered in the database. The analysis of the data is then simply to make histogram counts of the results and present them in a plot. A minimal sketch of this driver loop follows below.
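This is only a sketch of the overall flow, not a prescribed layout; the names z, drop_position and relax are illustrative placeholders for routines you write yourself:

  ! Minimal sketch of the main loop: drop a grain, relax the pile, record the event.
  program sandpile
    implicit none
    integer, parameter :: n = 50, ndrops = 500000
    integer :: z(0:n+1,0:n+1)       ! grain counts; indices 0 and n+1 form a ghost edge
    integer :: ix, iy, idrop
    integer :: mass_lost, niter, ncells

    z = 0
    do idrop = 1, ndrops
       call drop_position(n, ix, iy)              ! Gaussian position near the center
       z(ix,iy) = z(ix,iy) + 1                    ! add one grain
       call relax(z, n, mass_lost, niter, ncells) ! let any avalanche run to completion
       if (niter > 0) then
          ! an avalanche occurred: store mass_lost, niter and ncells
          ! (e.g. write them to a file) for the histogram analysis
       end if
    end do
  end program sandpile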

All of the above just describes the situation in words. To actually get this to work, you need the rules that initiate the avalanches and keep them going until a new stable configuration has been reached.
Use the 2D variable z(x,y) to contain the number of grains at a given location, and use the random generator to choose a location at which to add one more grain. The rule is the classic one: whenever z(x,y) exceeds the critical value zcrit, four grains are removed from that point and one grain is added to each of its four nearest neighbors.
As you can see from the rules above, adding one grain at a given position in the grid changes only this point's value, possibly to a value above the instability criterion. Therefore, after adding a grain one only has to check if the new z value exceeds the critical value. If this is the case, grains are to be redistributed accordingly, and one must subsequently investigate the 4 neighboring points to see if any of them now exceed the criterion. Each instability gives rise to a ring of points that needs to be checked, as they can potentially have become unstable. This process continues until no new points are unstable. To prevent this process from continuing indefinitely, special care must be taken at the edge of the chosen domain, where the grains placed outside the physical domain represent the grains lost from the pile. Adding these up gives the size of the avalanche, after which they must be reset to zero.
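As an illustration, a brute-force version of this relaxation could look as sketched below. It sweeps the grid until nothing topples; the fast approach mentioned under the program requirements instead tracks only the points that may have become unstable. The ghost-edge layout matches the driver sketch above, and niter here counts relaxation sweeps:

  ! Sketch of a brute-force relaxation; names and layout are illustrative.
  subroutine relax(z, n, mass_lost, niter, ncells)
    implicit none
    integer, intent(in)    :: n
    integer, intent(inout) :: z(0:n+1,0:n+1)  ! ghost edge collects grains leaving the plate
    integer, intent(out)   :: mass_lost, niter, ncells
    integer, parameter     :: zcrit = 4
    logical :: unstable, hit(n,n)
    integer :: i, j

    niter = 0
    hit   = .false.
    unstable = .true.
    do while (unstable)                 ! sweep until no point exceeds the criterion
       unstable = .false.
       do j = 1, n
          do i = 1, n
             if (z(i,j) > zcrit) then   ! toppling rule: redistribute 4 grains
                z(i,j)   = z(i,j) - 4
                z(i-1,j) = z(i-1,j) + 1
                z(i+1,j) = z(i+1,j) + 1
                z(i,j-1) = z(i,j-1) + 1
                z(i,j+1) = z(i,j+1) + 1
                hit(i,j) = .true.
                unstable = .true.
             end if
          end do
       end do
       if (unstable) niter = niter + 1
    end do

    ncells = count(hit)                 ! number of cells involved in the avalanche
    ! grains in the ghost edge have left the plate: their sum is the mass loss
    mass_lost = sum(z(0,:)) + sum(z(n+1,:)) + sum(z(1:n,0)) + sum(z(1:n,n+1))
    z(0,:) = 0; z(n+1,:) = 0; z(:,0) = 0; z(:,n+1) = 0
  end subroutine relax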

Let's bring this into a more concise formulation of the problem: traditionally, the instability criterion is set to zcrit = 4. Making it larger makes no difference to the distribution function, as it only provides an offset in the number of unchanged grains in the z array.



Programming Assignment:

Before you start coding the final task, take the time to understand the information above. It is vital that you have a clear picture in your mind, or rather on paper, of what needs to be done: which conditions need to be included, and in which order; how the program can be structured, i.e. how the problem can be divided into small subtasks and how they all fit into the global puzzle --- you may benefit from making a pseudo flow diagram of the code you have to write. When this is in place, there are a number of requirements on the way the program is written:

Program requirements:

You have two options: either write everything from scratch, or get a little help. The trickiest part of the program is setting up the double sorting tree structure. It can be done as a careful extension of the sorting program you worked with in the previous exercise. The other possibility is to download this tar file to your exercise directory; it contains a not fully working file that may help you on the way to a fully working list handler.

To use the included random number generator you have to do: This returns one Gaussian-distributed real number in the variable tmp, with a mean value of 0 and a 99% likelihood radius of 3. Scale the radius to a given percentage of the domain size and add an offset to place it at the center of the domain.
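The exact call to the provided generator is not reproduced here. As a stand-in illustration of the scaling and offset, here is a Box-Muller sketch with hypothetical names gauss_pair and drop_position (substitute the provided generator for gauss_pair):

  ! Hypothetical stand-in: Box-Muller gives Gaussian deviates with mean 0 and
  ! a 99% likelihood radius of about 3, matching the description above.
  subroutine gauss_pair(g1, g2)
    implicit none
    real, intent(out) :: g1, g2
    real, parameter :: twopi = 6.2831853
    real :: u1, u2
    call random_number(u1)
    call random_number(u2)
    u1 = max(u1, 1.0e-10)                       ! avoid log(0)
    g1 = sqrt(-2.0*log(u1)) * cos(twopi*u2)
    g2 = sqrt(-2.0*log(u1)) * sin(twopi*u2)
  end subroutine gauss_pair

  ! Scale the 99% radius of 3 to a fraction of the domain and center it.
  subroutine drop_position(n, ix, iy)
    implicit none
    integer, intent(in)  :: n
    integer, intent(out) :: ix, iy
    real :: g1, g2, width
    call gauss_pair(g1, g2)
    width = 0.05 * (0.5*n)                      ! dumping region ~5% of half the grid
    ix = nint(0.5*n + width*g1/3.0)             ! map the 99% radius onto 'width'
    iy = nint(0.5*n + width*g2/3.0)
    ix = min(max(ix, 1), n)                     ! clip the rare outliers to the grid
    iy = min(max(iy, 1), n)
  end subroutine drop_position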

A simple test run that shows the distribution functions can be done using a grid of size 50 and half a million grain dumps. Allow the dumping region to be on the order of 5% of half the grid dimension. Using the fast approach for tracking potentially unstable points, it should run in about a minute.

These are the results I get using plot.jou with the values above:

[Plots: the resulting distribution functions]



Report:

Apart from the programming part of this exercise, you are also required to write a short report. The report is supposed to provide the information outlined below, showing that you have an idea about the basic problem and the layout and use of the working program. In addition, there must be a description of how to compile and run the program and how to generate the included plots. The report has to be detailed enough that one of your fellow students not following Dat-F could compile and use your program to reproduce the plots. Here is a list of headlines to address in the report:
You are free to write the report in Danish or English. With regard to the format of the report, it can be provided either as a PDF file or as an OpenOffice or Word document.

Before you are finished, collect all the files that are important for us to evaluate the project in a single tar file called project.tgz. For this you need to do something like:

  $ tar -zcvf project.tgz Makefile *.f90 Report *.in gnu_plot_scripts/idl_scripts

where the names after project.tgz represent the files you want to include in the tar file. You may want to include this in the Makefile, as sketched below, and then only need to write something like:

  > make project
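A minimal sketch of such a Makefile target could be (adjust the file list to what your directory actually contains; note that the command line under a make target must start with a tab character):

  project:
  	tar -zcvf project.tgz Makefile *.f90 Report *.in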


This time, submitting the project file is not enough. Because your Exercise_8 directory is only accessible to you, the project.tgz file must be emailed to kg@nbi.ku.dk.


Finally, make a "cleanup" of the files in the Exercise directories that are not required. Go into your Dat_F directory and use du -s * to find the large directories.



References:

For those interested in the background material for the exercise:



Thank you for choosing Dat-F as one of your courses in this block.

    The Dat-F team


