Suresh

Week 3 - Why do we use simulation to study something?

Simulation allows for multi-faceted, multi-dimensional modeling of complex systems and their agents in a way that is difficult with other approaches (Marney and Tarbert, 2000). Where orthodox approaches are not viable, simulation can be handy because it captures the emergent behavior of artificial life and complex systems. Simulations enjoy a particular advantage in modeling aggregate social outcomes that arise from heterogeneous individual behavior. What is vital here is the focus on the local agents, their heterogeneity, their local interactions, the ability to accommodate the various sub-systems and how they interrelate and interact, and how the whole dynamic changes when a few parameters change; this kind of complex dynamic modeling is better studied with simulation than with conventional approaches. The scope for cheaper experimentation and the flexibility to change a few parameters and watch a new system emerge highlight the power of simulation in general and computer simulation in particular. It enables us to understand the dynamics of interactions, which may be more significant than the equilibrium of the underlying behavioral processes. If we buy the argument of LeBaron, Arthur and Palmer (1997) that markets never really settle down to equilibrium, we have a task at hand: to study how they are transitory. If the market moves from one stable state to another, from one critical state to another, and swings between order and randomness, simulation is the tool to study such a dynamic space and to represent the unfolding of a dynamic process in which global outcomes are generated by heterogeneous local decision rules and local interactions.

Where Axtell (1996) advocates a bottom-up approach to studying emergent behavior and local interactions, simulation offers a convenient way of counter-factual experimentation with historical development and contextual background, while still focusing on local interactions by considering the interrelationships among individual agents or, in other cases, among the various systems and their sub-systems. All the while, it offers tremendous scope for a c-domain/p-domain correspondence, as it is straightforward to compare the results of another, parallel methodology with the simulation dynamics. In addition, there is scope to replicate real-life dynamics driven by continual disequilibria. In summary, where other methods might focus on the aggregate without considering the temporal and transitory aspects, simulation is encompassing, with its attention to time- and context-dependent localized behavior and its flux, and it is the first touch point for a bottom-up and possibly simplified analysis of complex systems.

Simulations may in some cases be context specific, as in Kliemt's (1996) store simulation example. But where they stand out is the low cost of experimentation in understanding what, in principle, might and might not work. They are still deemed rich enough and relevant for the practical purposes at hand. Alternatively, when we have only vague ideas about which kinds of mechanisms might lead to certain phenomena, and there are various possible explanations, simulation can be handy for gaining some fundamental understanding of the process itself. For example, in understanding galactic phenomena for which no tractable mathematical model can be formulated, simulation can assist: we begin with a threadbare model and thereafter juggle the parameters, add more parameters, define and refine assumptions, and try to formulate fundamental, independent knowledge of the process that is otherwise impossible to formulate. At the least, a platform is set for disciplined speculation that is otherwise not possible. When there is no process to 'imitate' by simulation, nor a c-domain/p-domain correspondence to form with another technique, we can begin to define one in a disciplined and emergent way. We replicate processes by imitating, or we learn while we do in order to formulate an original process itself; both are possible with simulation. This is where simulation's heuristic capabilities come to the fore: developing hypotheses, models and theories by juggling the parameters and finding new, simple regularities that could not have been extracted from the assumptions alone. Here the emergence can be the result of refining the assumptions and/or of a trial-and-error method of educated guesses. But we can see emergence everywhere, beginning at the local level and aggregating up in a bottom-up fashion.

Some Applications and Advantages

I will begin with some real-life examples where simulation has a bigger role to play than any other technique. I have brought in concepts from the readings where they are relevant.

1. Simulators in space aviation – though we read about flight simulators, the space aviation arena is more challenging because of the conditions it imposes on the human subjects. Apart from the constraints on the spacecraft that must be understood for a successful mission, there are significant factors influencing human behavior and ability while the shuttle is in space. What other technique could replicate the situation without compromising on the ethical and cost dimensions? Here simulation addresses both the technical and the ethical challenges.

2. Car Crash and Safety Tests – one can clearly see the benefits of simulation here, since there is enormous scope to test various configurations of parameters without damage to property or persons. The end result also includes reduced testing cost, which can directly affect pricing; nor should we forget the risk to human testers. The use of computer-aided engineering (CAE) and virtual prototyping technology is critical in reducing reliance on physical prototypes, cutting cost and shortening the overall product development process.

3. Policy Making – We read an example of a tax hike that may not be feasible either ethically or pragmatically. Deeper down, there are several other budgetary decisions a government may want to weigh without citizens being affected or even aware. Changing the parameters of the economic indicators and seeing their macro influence can be done efficiently, ethically and as a low-cost experiment only with simulation capabilities. Here simulation helps both as a heuristic and as a substitute for the real task. Similarly, there are companies like AnyLogic that provide tools to simulate city planning and urbanization.

4. Simulation in Education – of civilian and military personnel, where other methods are too expensive or prohibitive. In such scenarios, valuable lessons can be learned in a safe virtual environment. Additionally, there is scope to record and observe the experience and behavior from the simulation run.

5. Emergency Planning and Disaster Response – We already read a case about how simulation is one of the first sources for seeing regularities or patterns in earthquakes. Though no human effort has succeeded in reliably predicting earthquakes, we are now equipped with some capability to forecast them by studying regularities, one of the core principles of simulation science itself. Beyond the understood use of replicating disasters for training purposes, simulation tools are being used for forest fire fighting, oil or chemical spill response, earthquake response, law enforcement, municipal fire fighting, hazardous material handling, military training, and response to terrorist attacks. Simulation enjoys a clear advantage over other techniques in terms of these predictive capabilities.

6. Pharma and Healthcare – While pharmaceutical formulation involves a large element of chance, simulation provides cheap and effective opportunities to explore different molecular configurations. In the healthcare industry, medical simulators are being developed for therapeutic and diagnostic purposes. Newer tools that actively reproduce the human anatomy (active models), that are interactive in their responses (interactive models), and computer simulators that assess learner skills are increasingly being deployed. Some of these address predictive training needs that prepare agents for disaster response, while others help study the historical antecedents that may lead to unknown situations, i.e. they are both deductive and predictive in nature.

7. Software Development - Tools can be used for developing the conceptual and architectural design of software-intensive systems. Rather than relying entirely on logical and physical system specifications before entering detailed design and then implementation, simulation enables evaluation of system architecture behavior and identification of major flaws or shortcomings in a system's architectural specifications early in the analysis and design phases. Simulation of system architecture produces benefits such as higher-quality architectural specifications and reduced costs.

Within software design, simulation can sometimes be a concept-driven approach wherein you add parts to, and remove parts from, the system based on your judgment of parameters like integration feasibility, functional value, cost, effort, etc. The core here is the concept-driven approach: you have the target system in mind, but simulation lets you touch and weigh each of the intermediate states of the system under permutations and combinations that you might not be able to think of, or to try, without it.

8. Process Design and Redesign - Organizations that redesign their processes before embarking on major initiatives like an ERP implementation can use simulation tools to envision the new processes and compare them with the old ones. Proper process design incorporates simulation software to model a process accurately and ensure it can be designed safely with the lowest total cost of ownership, by increasing the efficiency of the design engineers and reducing the time needed to complete an accurate design. And because the redesign would otherwise be a laborious task, doing it in simulation avoids any disruption to the current business.

9. Supply Chain Management – Simulation is a valuable tool for formulating supply chain strategy and preparing risk management mechanisms. Market-standard simulation software is available to optimize supply chain planning by including parameters such as worldwide sourcing locations, transportation costs, tax regimes, tariff structures, route planning, transportation cost planning and capacity planning, and to determine optimal sourcing strategies. For example, Intel could avert losses from SARS thanks to its ability to simulate scenarios quickly: it noted that its China plant could be hit by the epidemic and therefore planned to utilize the excess capacity in its Indonesia plant. It is a different story that Intel's 'Exact Match' plant design policy made this simple. Companies like G-Log have simulation tools that help you model how to organize materials at the truck level so as to load a truck optimally. The Intel example highlights capacity planning on one side and disaster response planning on the other.

References

1. "Why do simulation?", by J.P. Marney and Heather F.E. Tarbert. Journal of Artificial Societies and Social Simulation, 3(4), October 2000.
2. "Simulation and rational practice", by Hartmut Kliemt. In Rainer Hegselmann, Ulrich Mueller, and Klaus G. Troitzsch, editors, Modelling and simulation in the social sciences: From the philosophy of science point of view, volume 23 of Series A: Philosophy and methodology of the social sciences, pages 13-27. Kluwer Academic Publishers, 1996.
3. "Social science simulation —- Origins, prospects, purposes", by Klaus G. Troitzsch. In R. Conte, R. Hegselmann, and P. Terna, editors, Simulating Social Phenomena, pages 41-54, Springer, 1997.
4. "The world as a process: Simulations in the natural and social sciences", by Stephan Hartmann. In Rainer Hegselmann, Ulrich Mueller, and Klaus G. Troitzsch, editors, Modelling and simulation in the social sciences: From the philosophy of science point of view, volume 23 of Series A: Philosophy and methodology of the social sciences, pages 77-100. Kluwer Academic Publishers, 1996.
5. Wikipedia – Computer Simulation

Week 4 - When and why would you do simulations instead of, or in addition to, mathematical analysis like neo-classical economics?

In situations where complexity increases beyond human rationality, or where objective, well-defined and shared assumptions cease to apply, the problem becomes ill-defined. In such complex or ill-defined scenarios, the deductive logic that served as the tool of traditional analysis becomes ineffective, because the observables tend to be sparse and are hard to encapsulate in a set of mathematical modeling equations. In such situations we look for patterns and simplify the problem by using inductive logic to construct internal models or hypotheses to work with; these working models fill the gap in our understanding and help us deal with ill-defined problems. When deductive logic can no longer comprehend the evolution of the system, simulation plays a significant role in gaining this understanding by embodying this inductive logic: the agents adapt in relation to the environment, adopting or discarding their beliefs, mental models and hypotheses in response to the aggregate environmental changes.

In addition, when the mathematical model is non-linear and simple analytical solutions are unavailable or expensive, simulation is a feasible, low-cost option for experimenting with different scenarios. Rather than working out an analytical solution to the problem, computational models let us define the parameters and then change them to try out different outcomes. These two scenarios highlight where computational modeling can clearly address the failings of traditional modeling.

Temporally bound situations are another place where simulation approaches may score over the traditional analysis used in mathematical and economic models. Here the ability to run the simulation analysis at various points in time comes to the fore, as compared with an analysis at a single fixed point in time. Simulation methodologies also enable what-if analysis with multiple parameters, juggling those parameters, and running iterative experiments at low cost. This is not to claim that simulation methodologies always carry the full traditional rigor. Simulation opens a new set of questions and generates phenomena that have multiple dimensions; at the least, it makes such exploration possible. At times, mathematical analysis can be complementary, erasing some of the mystery around some dimensions of the simulation model. Thus, wherever possible, while simulation opens new vistas of thinking through its emergent and adaptive behavior, mathematical modeling can complement it, explaining a few dimensions with orthodox rigor. But simulation scores with its emphasis on local-level interactions rather than aggregate analysis, and with its ability to accommodate the temporal dimension.

Delving deeper into the simulation approaches, agent-based modeling and equation-based modeling both deal with observables and accommodate the temporal aspects highlighted in the earlier paragraph. Each has its advantages: agent-based modeling deals better with individual behaviors and how agents interact with each other, while equation-based modeling captures the relationships between the observables in the form of equations.
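To make the contrast concrete, here is a minimal sketch of my own (not from the readings) of the same growth process modeled both ways: an equation-based logistic model that works with one aggregate variable, and an agent-based version in which individual agents reproduce with a simple crowding-dependent probability. The parameter values are arbitrary.

```python
import random

# Equation-based model (EBM): one aggregate equation for population N,
# iterated in discrete time: N <- N + r*N*(1 - N/K).
def ebm_logistic(n0=10.0, r=0.1, K=1000.0, steps=100):
    n = n0
    for _ in range(steps):
        n = n + r * n * (1 - n / K)
    return n

# Agent-based model (ABM): each agent individually reproduces with a
# probability that falls as the population approaches the carrying capacity,
# so a similar logistic shape emerges from local, individual-level rules
# rather than from an aggregate equation.
def abm_logistic(n0=10, r=0.1, K=1000, steps=100, seed=42):
    rng = random.Random(seed)
    agents = n0
    for _ in range(steps):
        births = sum(1 for _ in range(agents)
                     if rng.random() < r * (1 - agents / K))
        agents += births
    return agents

if __name__ == "__main__":
    print("EBM final population:", round(ebm_logistic()))
    print("ABM final population:", abm_logistic())
```

Both runs end up near the carrying capacity, but only the agent-based version lets us attach heterogeneity (different metabolisms, local neighbourhoods, and so on) to the individuals.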

Week 4 - Game of Life Patterns

I liked the following 3 patterns in the Golly library -

1. SediMental and Lava under the Generations folder
2. Langton's loop under the Loops sub-folder.

Week 4 - Questions to consider

Here I am providing my answers to the questions that were asked in the various sections of the syllabus for the week.

1. What is the Game of Life?

  • Evolution based on the initial state and a simple set of rules
  • How complex patterns can emerge even if we begin with a simple set of rules
  • The emergent and self-organizing behavior.

The Game of Life is a zero-player game: its evolution is determined by its initial state, requiring no further input from humans. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead. Every cell interacts with its eight neighbors, which are the cells that are directly horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

  1. Any live cell with fewer than two live neighbours dies, as if caused by underpopulation.
  2. Any live cell with more than three live neighbours dies, as if by overcrowding.
  3. Any live cell with two or three live neighbours lives on to the next generation.
  4. Any dead cell with exactly three live neighbours becomes a live cell.

The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed—births and deaths happen simultaneously, and the discrete moment at which this happens is sometimes called a tick (in other words, each generation is a pure function of the one before). The rules continue to be applied repeatedly to create further generations.
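As a quick self-check on the rules, here is a minimal one-generation step written directly from the four rules above (my own sketch, not Golly's implementation; the unbounded grid is represented as a set of live cells):

```python
def neighbours(cell):
    """The eight cells adjacent to a (row, col) cell."""
    r, c = cell
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

def step(live_cells):
    """Apply the four rules simultaneously to every cell.

    live_cells is a set of (row, col) tuples; since the grid is unbounded,
    only live cells and their neighbours need to be examined.
    """
    counts = {}
    for cell in live_cells:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    next_gen = set()
    for cell, count in counts.items():
        # birth (rule 4) or survival (rule 3); everything else dies or stays dead
        if count == 3 or (count == 2 and cell in live_cells):
            next_gen.add(cell)
    return next_gen

# A "blinker": three cells in a row oscillate between horizontal and vertical.
seed = {(0, -1), (0, 0), (0, 1)}
print(step(seed))        # vertical: {(-1, 0), (0, 0), (1, 0)} (set order may vary)
print(step(step(seed)))  # back to the original horizontal seed
```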

The patterns that emerge from the simple rules may be considered a form of beauty. Small isolated subpatterns with no initial symmetry tend to become symmetrical. Once this happens the symmetry may increase in richness, but it cannot be lost unless a nearby subpattern comes close enough to disturb it. In a very few cases the society eventually dies out, with all living cells vanishing, though this may not happen for a great many generations. Most initial patterns eventually "burn out", producing either stable figures or patterns that oscillate forever between two or more states; many also produce one or more gliders or spaceships that travel indefinitely away from the initial location.

2. Why is Life So Interesting?

Life is one of the simplest examples of what is sometimes called "emergent complexity" or "self-organizing systems." This subject area has captured the attention of scientists and mathematicians in diverse fields. It is the study of how elaborate patterns and behaviors can emerge from very simple rules.

3. How do the individual cells in a GoL evolve?

  • There should be no initial pattern for which there is a simple proof that the population can grow without limit.
  • There should be initial patterns that apparently do grow without limit.
  • There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in the following possible ways:

• Fading away completely (from overcrowding or from becoming too sparse), or
• Settling into a stable configuration that remains unchanged thereafter, or
• Entering an oscillating phase in which they endlessly repeat a cycle of two or more periods.

4. What causes the complexity of the GoL to appear?

Lower-level interactions cause the emergence, but once you reach a higher level (take a human, for instance) you cannot use the interactions of the cells to predict how he looks or what he does. The aggregates emerge from the low-level interactions, but the aggregates cannot be explained using the low-level entities.
The concept of emergence is key to the model. New objects and properties emerge from a lower level of interaction, in such a way that these new objects and properties can't be understood or predicted in terms of the way their lower-level parts interact. Furthermore, these levels of complexity are built on top of one another. This resonates perfectly with the structure of high-level organisms - the organism emerges from the layer of cells, cells emerge from the layer of molecules, molecules from atoms, and so on. There is nothing about the interaction of individual cells in a kangaroo that can be used to predict what that kangaroo will look like (or what it will do next). In other words, phenomena at a certain level can not be reduced to, or explained by, the level underlying it.

The complexity is an emergent phenomenon of the interactions generated by the rules. Even though the initial state is known and the set of rules is simple, it is impossible to predict whether an arbitrary starting pattern will die out, start oscillating, or expand forever. Life and other CAs provide a powerful demonstration of how a very simple system can generate extremely complicated results.

5. Is the GoL deterministic or stochastic? Whatever you answered, what changes to the game rules would change your answer?

  • One article we read mentioned that if we observe the activity after step no. 1107, the pattern turns into something else; such a precise claim is only possible if the game is deterministic. That article gives us the hint.
  • two identical grids with the same starting conditions will be identical to each other, every step of the way
  • The game of life is a deterministic process, the outcome being strictly predetermined by the geometry of the board and the rules of creation and destruction of the checkers on the cells.
  • The evolution of the game is determined solely by its initial state. The experimenter creates a starting pattern of living cells, and then runs a simulation to see what becomes of it based on special rules that are followed.
  • The state of a cell at a given timestep depends only on its own state and the cell states in its neighbourhood, all taken at the previous step.
  • Computations are determined by the adjacent values, i.e. the players' presence in the adjacent cells.
  • Tough to predict, but repeatability is 100% for the same set of rules and the same initial settings

As the game progresses, many checkers die, and new ones are born. Of course the game is deterministic, the outcome as time progresses is strictly a determined result of the initial distribution of the checkers, and, of course, the particular set of rules chosen. Some initial distributions result in nothing interesting. Some quickly develop stable patterns of counters, and sometimes these patterns are in motion across the board, some are stationary and "oscillate" or "blink", some do so while in motion. When two patterns happen to "collide" new patterns often form, but sometimes the two colliding patterns annihilate each other.

How to make it stochastic:

• Bring in probabilities instead of deterministic rules - an increasing neighborhood density could increase the probability of turning "on" rather than deterministically switching at a cut-off value (see the sketch after this list).
• Make the transition rules integrate a probability function.
• Play on the structural and the functional rules.
• The structural rules define the spatial structure of the automata network, that is, its number of dimensions, the disposition of cells (squares, hexagons, … in a two-dimensional automaton) and the way the neighbourhood is determined.
• The functional rules determine the number of states and the transition rules. The choice of these two types of rules makes it possible to build a universe adapted to the aim at hand.
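A minimal sketch of the first idea above, purely my own illustration: instead of a hard cut-off at two or three neighbours, a cell turns on with a probability that rises with neighbourhood density (the probability n/8 is an arbitrary choice).

```python
import random

def stochastic_step(grid, seed=None):
    """One generation of a probabilistic Life-like rule on a toroidal grid.

    grid is a list of lists of 0/1. Instead of the deterministic cut-off,
    a cell becomes (or stays) alive with probability n/8, where n is its
    number of live neighbours - a hypothetical rule, purely illustrative.
    """
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if rng.random() < n / 8 else 0
    return new

# Two runs from the same starting grid now diverge, unlike the deterministic game.
start = [[random.randint(0, 1) for _ in range(10)] for _ in range(10)]
print(stochastic_step(start, seed=1) == stochastic_step(start, seed=2))  # usually False
```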

6. What is meant by an "emergent phenomenon"? Give an example from the GoL.

  • Emergent phenomena are often unexpected, nontrivial results of relatively simple interactions of relatively simple components.
  • Here, complex systems emerge from simple rules and simple local-level interactions.
  • The whole GoL is an emergent phenomenon.

7. What type of (simplified) social processes might be modelled by the GoL?

• Afforestation, deforestation, spread of cities i.e. Urbanization
• Spread of diseases
• Traffic patterns
• Behavior of viruses, animals and human cells
• For models of diffusion in neighborhoods, one could extend neighborhoods to use the influences of neighbors inversely proportional to their distance

Example description of disease spread (see the sketch after this list):
  • The index case is located in the center of the grid and the disease starts spreading from there.
  • After time 't', the cells around the center are in mixed states: some get infected, some die and some remain uninfected.
  • The spread either reaches a stable state or turns into an epidemic, which is a metastable state.
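A rough sketch of such a grid-based spread (my own toy model; the infection and death probabilities are made-up illustrative values, not taken from the readings):

```python
import random

HEALTHY, INFECTED, DEAD = 0, 1, 2

def spread_step(grid, p_infect=0.3, p_die=0.05, rng=random):
    """One time step: a healthy cell may catch the disease from an infected
    neighbour; an infected cell may die. Probabilities are illustrative
    assumptions, not values from the readings."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == HEALTHY:
                sick = sum(1 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if (dr, dc) != (0, 0)
                           and 0 <= r + dr < rows and 0 <= c + dc < cols
                           and grid[r + dr][c + dc] == INFECTED)
                if sick and rng.random() < 1 - (1 - p_infect) ** sick:
                    new[r][c] = INFECTED
            elif grid[r][c] == INFECTED and rng.random() < p_die:
                new[r][c] = DEAD
    return new

# Index case in the centre of an otherwise healthy grid.
size = 21
grid = [[HEALTHY] * size for _ in range(size)]
grid[size // 2][size // 2] = INFECTED
for _ in range(20):
    grid = spread_step(grid)
print("infected cells after 20 steps:", sum(row.count(INFECTED) for row in grid))
```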

Week 5 - Turtles, Termites & Traffic Jams - Book Summary

Here are some notes I took while reading the book. I deliberately skipped chapter 3 as it is more hands-on.

Chapter 1 & Chapter 2 - Notes

1. Orderly patterns arise from local interactions among decentralized components.

2. Self-organizing without a leader is the hallmark of the decentralized mindset propounded in this book.

3. Examples of the former USSR, IBM, your mind, distributed computing etc highlight the move towards decentralization in the last 2 decades.

4. Industry examples like Quality Circles show the move in the commercial sectors.

5. IT plays a part by decreasing coordination costs and shifting activity towards outside markets.

6. Decentralized implies viewing the world as decentralized interactions and feedback loops - network patterns arising out of interactions.

7. Self-organizing implies creating order out of low level randomness.

8. Chaotic paradigms of the past imply unpredictable behavior out of low level deterministic rules.

9. Note the difference between the self-organizing and chaotic scenarios. One is randomness leading to order and the other is determinism leading to uncertainty.

10. Freud - the self emerges out of low-level interactions.

11. Theories of Knowledge - meaning itself is decentralized.

12. Constructionism - new ways for teachers to instruct vs new ways of learners to construct. They do not manipulate objects, they construct meaningful objects.

13. LEGO/Logo - Page 30 - there are three levels to interpret the decentralization world:

a. mechanistic - how one piece makes another move

b. information - how information flows between one device and another, for example between sensors and computers

c. Psychological - intentionality or personality to the creatures.

14. LEGO/Logo suffers from centralized control, as one part tells the others what to do.

15. LEGO/Logo does not have births etc. that, if present, could make the system more alive.

16. Starlogo is more dynamic: we can see many more turtles, their colony-like local behavior, the turtles' ability to sense, and a reified world where they interact with the environment.

17. Per the point above, turtles and the environment together leads to distributed cognition.

18. Starlogo has three kinds of objects - Observer (a kind of superior force), turtles and patches.

19. Starlogo's object orientation means the user is shielded from mathematical formalisms like Cartesian coordinates.

20. Starlogo parallelism - multiple objects like numerous turtles and the environmental patches interacting at any one given instant.

Chapter 4 - Notes

1. Why are we bogged down by centralized thinking?

a. Our belief in God-like control to explain world phenomena
b. We look for a cause that drives from the center
c. We think phenomena like extinctions are exogenous.
d. Lead or seed - some leader or reason marshaling us to act.
e. 'I' - the central actor in charge.

2. Starlogo intrigue is that we don't know what the whole system will do together.

3. Decentralized - more independent & less predictable.

Guidelines for heuristic thinking

1. Positive feedback isn't always negative - as can be seen from Silicon Valley. Slime molds example shows how positive feedback extends the puddle as more cells get attracted.

2. Randomness can help create order - random fluctuations create the seed for the patterns to grow. Positive feedback helps explore a food source while randomness helps explore multiple food sources as seen in the Ant example.

3. A flock isn't a big bird - frogs & turtles end up more segregated than they actually want.

4. A traffic jam isn't a collection of cars - local interactions create the jams. Hence the story is not about cars but their interactions at the local level.

5. Hills are alive - the patches i.e. the environment has a constraining & influencing behavior. The patches i.e. the environment creates interactions by combining with the agent behavior.

Chapter 5 - Notes

1. Why we toggle between centralized and decentralized thinking is that:

a. When thinking about cause and effect, we focus on the cause which is the underpinning of centralized thinking.

b. We explore the mutual relation between cause and effect to have decentralized view.

2. Inculcating decentralized thinking requires a comfort with designing, playing with and exploring the world, which in turn leads to more comfort with decentralized thinking.

3. We may need micro-world construction kits to facilitate this exploration and thereby construct meaningful objects.

4. Decentralized thinking should supplement the centralized thinking.

Week 6 - Ideas and tentative models for term paper

Per the class discussion, I am furnishing the details of the two models discussed. These may change based on the theoretical foundations Prof. Scott has asked us to build the models on.

Social Capital - Bowling Alone

"We are not talking here simply about nostalgia for the 1950s. School performance, public health, crime rates, clinical depression, tax compliance, philanthropy, race relations, community development, census returns, teen suicide, economic productivity, campaign finance, even simple human happiness — all are demonstrably affected by how (and whether) we connect with our family and friends and neighbours and co-workers.

And most Americans instinctively recognize that we need to reconnect with one another. Figuring out how to reconcile the competing obligations of work and family and community is the ultimate "kitchen table" issue. As practical solutions to the problem become clearer — a radical expansion of the Family and Medical Leave Act is my current favorite — the latent public support for addressing the underlying issue will become an irresistible "market" for ambitious political candidates." - Robert Putnam (Atlantic Unbound interview 2000).

While Putnam's argument highlights the oscillating trends in social capital in American life, it also presents an opportunity to model a scenario in which social capital is heavily influenced by emerging technologies.

While the telephone enriched social capital by fostering connectivity, television made the living room a venue for family get-togethers. Internet-based personal computing has isolated individuals, even as the internet has offered unprecedented social connectivity; so internet-based technologies have either isolated or socialized individuals, depending on how you view them. It will be interesting to model these social and technological phenomena and see how social capital has fluctuated with technological emergence.

I can capitalize on some of the learning from the two models we discussed today about innovation and markets, which have the in-silico capability to create regions of user presence and interaction.

Co-creation & Innovation

While organizations have embarked on various exercises to innovate their products or engage in new product development, the co-creation paradigm advocated by Prahalad & Ramaswamy in their HBR article gives new ways to connect with customers. Engaging customers to create mutual value through engagement platforms is the hallmark of the co-creation proposition.

I would like to model different types of users, i.e. lead users, moderately attached and unattached consumers of the organizations. I will pit them against three types of organizations: those believing in co-creation, those trying to join the bandwagon and those ignoring the paradigm. Modeling this and understanding which combinations create mutual value for the consumers and the companies will be a humble attempt to simulate this management philosophy. Finding the results and comparing them with the theoretical work will be interesting.

I would like to use some of the learning from the 2-3 models we saw today so that I can create the required segregations among the users and the companies.

Week 7 Readings - Seeing Around Corners

Computers will probably never enable us to foresee the future in detail, but we might learn to anticipate the kinds of events that lie ahead and where to look for interventions that might work: look for the small interventions that produce the maximum result.

Complicated social patterns can emerge even when individual people are following very simple rules, and those patterns can suddenly shift or even reverse as though of their own accord.

You will get more segregation than expected if you put people together and just let them interact.

Schelling's model implied that even the simplest of societies could produce outcomes that were simultaneously orderly and unintended: outcomes in no sense accidental, yet in no sense deliberate.
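To remind myself how little is needed for segregation to appear, here is a stripped-down Schelling-style sketch of my own (a one-dimensional ring rather than Schelling's board; the one-third tolerance threshold and the other parameters are arbitrary):

```python
import random

def schelling_1d(n=60, empty_frac=0.2, threshold=1 / 3, steps=300, seed=0):
    """Toy one-dimensional Schelling model on a ring (parameters arbitrary).

    Cells hold 'A', 'B' or None (empty). An agent is unhappy if fewer than
    `threshold` of its occupied neighbours (two on each side) share its
    type; one unhappy agent per step jumps to a random empty cell.
    """
    rng = random.Random(seed)
    cells = (['A', 'B'] * n)[:n]
    for i in rng.sample(range(n), int(n * empty_frac)):
        cells[i] = None
    rng.shuffle(cells)

    def unhappy(i):
        me = cells[i]
        occupied = [cells[(i + d) % n] for d in (-2, -1, 1, 2)
                    if cells[(i + d) % n] is not None]
        return bool(occupied) and sum(x == me for x in occupied) / len(occupied) < threshold

    for _ in range(steps):
        movers = [i for i in range(n) if cells[i] is not None and unhappy(i)]
        empties = [i for i in range(n) if cells[i] is None]
        if not movers or not empties:
            break
        i, j = rng.choice(movers), rng.choice(empties)
        cells[j], cells[i] = cells[i], None
    return cells

# Long runs of the same letter (segregated clusters) typically appear, even
# though each agent tolerates being in a local minority of up to two-thirds.
print(''.join(c or '.' for c in schelling_1d()))
```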

The only way to discover what pattern will emerge from a given set of rules and a particular starting point is to watch for the results – unpredictable patterns emerging out of local interactions.

Complexity = the generation of spontaneous order and intricate patterns from seemingly simple rules.

The Sugarscape model showed a skewed distribution of sugar equivalent to the skewed distribution of societal wealth. Where it differs from traditional models is in the heterogeneity of the agents: agents have unique preferences and knowledge as coded. Agents have varying personalities and limited information, and they exhibit bounded rationality, making the most rational choices they can on the basis of limited information.
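For reference, the basic Sugarscape movement rule is simple enough to sketch; the following is my own paraphrase of rule M from Epstein and Axtell, with a placeholder grid and agent type rather than their actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    x: int
    y: int
    vision: int
    metabolism: int
    wealth: int

def sugarscape_move(agent, grid, occupied):
    """Roughly Sugarscape rule M: look out as far as your vision in the four
    lattice directions, move to the nearest unoccupied site with the most
    sugar, harvest it, then pay your metabolism. grid[y][x] holds sugar."""
    height, width = len(grid), len(grid[0])
    candidates = [(grid[agent.y][agent.x], 0, (agent.x, agent.y))]
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        for dist in range(1, agent.vision + 1):
            x, y = (agent.x + dx * dist) % width, (agent.y + dy * dist) % height
            if (x, y) not in occupied:
                candidates.append((grid[y][x], dist, (x, y)))
    sugar, _, best = max(candidates, key=lambda t: (t[0], -t[1]))  # most sugar, then nearest
    agent.x, agent.y = best
    agent.wealth += sugar - agent.metabolism
    grid[best[1]][best[0]] = 0

grid = [[0, 1, 4, 2],
        [3, 0, 1, 0],
        [2, 2, 0, 1],
        [0, 1, 3, 0]]
a = Agent(x=0, y=0, vision=2, metabolism=1, wealth=5)
sugarscape_move(a, grid, occupied=set())
print(a)  # moves to the richest visible site (sugar 4 at x=2, y=0); wealth 5 + 4 - 1 = 8
```

Because vision and metabolism vary from agent to agent, repeated application of this one rule is what produces the skewed wealth distribution mentioned above.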

Traditional modelers from the social sciences look at social patterns in the real world and try to write equations that describe what's going on. They see the world from the top, try to fit it to regular lines and curves, and then use those to make predictions.

Hammond's model of a corrupt society tipped from one behavior to another. When a crackdown begins, many agents begin to behave honestly, corruption ceases to pay and everyone turns honest. The transformations seen in Hammond's model are due to the heterogeneity of the agents. No transformations were seen when the agents adopt each behavior half the time, as then everyone knows how the others behave.

Zipf's law describes the statistical relationship between size and rank. The overall distribution of large and small avalanches follows a power law even though the size of a particular avalanche is always unpredictable.
Complex systems follow patterns when observed over time, patterns which the individual actors are unaware of generating.

Even looking at the emergence of cities or the distribution of talent shows the higher order patterns into which human beings and thus societies unconsciously arrange themselves.

Artificial genocide – societies that order themselves systematically but unconsciously can also disorder themselves systematically and unconsciously.

Individuals make choices based on local conditions and incentives, which may suddenly tip the whole field into savagery.

Social patterns are produced by mindless packets of data following a few ridiculously simple rules. No agent is knowingly copying his peers or following the crowd. No agent is consciously organizing a self-protective enclave. All the agents are separately and individually reacting rationally, according to the rules and to local conditions that are altering rapidly.

Disparate social breakdowns in separate parts of the world might have common dynamics.

The Anasazi simulations show that the modelers came within a short distance of replicating the large trends of a real society, though they were not able to explain why the Anasazi vanished. This is a significant move demonstrating the power of simulation. We won't know where we will be blindsided, but we may learn which direction we may be hit from. We may look for small interventions that can have large, discontinuous consequences; on the flip side, we may identify the large interventions with low returns.

In summary, artificial systems take us close to modeling real societies. We can see patterns emerging out of simple local rules, and we are much closer to understanding complex phenomena. Heterogeneity of the agents and limited knowledge are some of the key features. Artificial systems are bottom-up, letting the phenomena arise out of simple local interactions, in contrast to traditional modelers from the social sciences, who look at social patterns in the real world from the top, try to fit them to regular lines and curves, and then use those to make predictions.

Week 8 - Evolution of Cooperation - Chapter 6

Chapter 6 – How to Choose Effectively

  • The Prisoner’s Dilemma shows that a pattern of mutual cooperation in the long run is better.
  • A durable iterated Prisoner's Dilemma supports the following 4 principles

a. Don’t be envious
b. Don’t be the first to defect
c. Reciprocate both cooperation and defection
d. Don’t be too clever

1. Don’t be envious

• In games, people use a standard of comparison, and typically this is the score of the other players
• To reduce the score of the opponent, defection comes into play and a self-destructive pattern of mutual defections emerges.
• TFT elicits such a behavior from the other player so that both do well.
• So consider how another strategy can be better by comparing one’s pay-off to oneself and see how it can be maximized.
• Don’t go after the other player. For example, try to maximize your own profit rather than cry over the supplier’s profit.

2. Don’t be the first to defect

• Tester tried defecting first and Tranquilizer occasionally defected, but neither did well.
• A population of nice rules is tough to invade and they resist the invasion by a cluster of other rules once they resist the invasion of a single rule.
• Future interactions are important and hence TFT comes into play.
• Example of Gypsies vs the garbage collection authority vs the doctors
• Will cooperation get reciprocated? It is tough to tell but even a small cluster of nice rules can invade a population of meanies.
• Harrington initially did well against lower-ranking entries, but as time progressed there were fewer lower-ranking entries, as they fizzled out.
• Its long-run viability was therefore lower, because the environment it exploited fizzled out.

3. Reciprocate both cooperation and defection

• TFT thrives in its interactions with other successful rules and won both the tournaments.
• It is provocable, which prevents exploitation, yet it retaliates only once, which prevents escalation.
• TFTT (Tit for Two Tats) had more forgiveness but was exploitable in the second tournament round
• Thus optimal forgiveness is required and it depends on the environment
• Forgiveness is costly as easy going rules can be exploited
• Forgiveness is ideal as otherwise it may lead to unending recrimination
• TFT proved quite well with an optimal forgiveness.
• Hence it is a good advice to reciprocate defections and cooperation.

4. Don’t be too clever

• Complex rules that made wrong inferences did not do well, as they assumed that the other player would not cooperate
• Some strategies did not take into account that their own behavior drives the behavior of the opponent.
• The other player will look for signs, and one’s behavioral signs determine what one will get. This is unlike chess, where one guesses the other player’s moves; here the signs are read.
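To see these properties mechanically, here is a small iterated Prisoner's Dilemma sketch of my own pitting TIT FOR TAT against an always-defecting rule, using the standard payoffs T=5, R=3, P=1, S=0 (illustrative values, not anything specific to the chapter):

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then simply repeat the other player's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# TFT never beats its opponent in a single match (it is not envious),
# yet mutual cooperation keeps its own long-run score high.
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(tit_for_tat, always_defect))    # (199, 204)
```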

Week 8 - Evolution of Cooperation - Chapter 7

Chapter 7 – How to Promote Cooperation

  • This chapter deals with changing the strategic setting to promote cooperation
  • Prisoner’s dilemma is in itself a short game but cooperation and reciprocity are possible in a longer game.

1. Enlarge the shadow of the future

• Cooperation is not stable when the future casts only a short shadow, i.e. when the weight w is small (see the sketch after this list).
• In small towns, weddings and live-and-let-live situations, cooperation holds over the long term due to the frequent interactions of the people.
• Even in arms treaties, breaking the agreement into small steps makes cooperation viable, as the short-term gains from cheating on the current step are less profitable than the long-term stakes.
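For instance, Axelrod's condition for TIT FOR TAT to be collectively stable ties w directly to the payoff values; the quick numeric check below uses the standard payoffs T=5, R=3, P=1, S=0 as an illustration of my own.

```python
# Axelrod's condition: TIT FOR TAT is collectively stable if and only if
# the discount weight w >= max((T - R) / (T - P), (T - R) / (R - S)).
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff

threshold = max((T - R) / (T - P), (T - R) / (R - S))
print(f"TFT is stable against invasion when w >= {threshold:.3f}")  # 0.667

for w in (0.3, 0.5, 0.9):
    print(w, "-> cooperation stable" if w >= threshold else "-> defection pays")
```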

2. Change the pay-offs

• Long term incentives for cooperation should be made greater than the short term incentives for defection.
• The punishment for defection, e.g. jailing, should be made greater than the benefit from evasion; in such a case the Prisoner’s Dilemma fizzles out.

3. Teach people to care about each other

• Parents shape preferences not only about individual welfare but also about social welfare
• Altruism may be fostered among kids, but sometimes activities like charity are done for other reasons, such as creating an obligation rather than increasing welfare
• Altruism may also be misconstrued as weakness. One may begin nicely, but should watch how the other party reciprocates.
• Reciprocity is the basis for cooperation

4. Teach reciprocity

• Morality teaches unconditional cooperation – the Gandhian way
• Reciprocity is a better foundation for morality as it will prevent the exploiters from exploiting.
• Strategies like TFT may be modified to impose a lesser punishment, providing a dampening effect so that long recriminations do not occur.
• Reciprocity, made somewhat more forgiving than TFT, leads to policing without a central authority to punish deviant strategies
• The others’ reciprocity helps police the community and punish the deviants. It implies self-policing without a central authority.

5. Improve recognition abilities

• Humans are well placed to keep past transactions in mind
• Inability to recognize leads to inability to verify - differentiating between earthquakes and underground nuclear explosion tests needed an ability to verify that was unavailable in the past
• Being able to recognize players from the past and be confident about their past actions leads to better cooperation, provided the other player was nice.

Week 11 - Spatial Games

Note: Please note that I made slightly elaborate notes for future reference but used sub-headings for quicker reading.

Complex Landscapes in Economic Geography – Paul Krugman

Questions raised:

In this paper, Paul Krugman begins with the question of how complex the economic landscape is. The explanation leads to sub-questions on what determines the complexity of the landscape, what this landscape looks like, and why economics is about dynamic complex systems.

In his argument to advocate for looking at economics as a dynamic complex system, he brings up a few more implicit questions on:

1. What is the relevance of complexity economics?
2. Why should economists take a look at it and add it to their research arsenal, and why is now the time?
3. How do economists and complexity theorists need to learn from each other, and how can the complementarities be leveraged?

Pitch for complexity economics

A departure from economies having constant returns and a unique equilibrium unveils a view of the economy as a dynamic system in a space defined over the allocation of resources. Country-specific external economies can support a number of locally stable equilibria and highlight the complexity: there are many basins of attraction for the economy to fall into. The importance of increasing returns, multiple equilibria and the role of history supports the hypothesis that the landscape is a complex one.

Krugman draws parallels from other fields like solid state physics, evolutionary theory and cognitive science. The non-linear responses studied in cognitive science suggest that economics, too, should look into increasing returns and dynamic systems with many basins of attraction.

Krugman’s model – with principles of complex adaptive systems

To demonstrate the dynamics of economic systems, he begins with a simple model of his own in which he emphasizes how an abstract landscape can evolve very naturally. The centripetal and centrifugal forces within the simple model, coupled with its assumptions and simple laws of motion, give rise to evolving complexity. This evolution brings up multiple equilibria and basins of attraction, the characteristics of dynamic complex systems. It thus has emergent characteristics and increasing complexity, yet it is still based on bare principles and simple assumptions. The emergence of the basins of attraction and the multiple equilibria shows how it has moved away from the notions of traditional economics, indicating why complexity economics goes beyond traditional economics.
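To fix the idea for myself, here is a toy sketch (emphatically not Krugman's actual specification): locations on a ring attract activity from their neighbours (a centripetal force) but lose attractiveness as they become congested (a centrifugal force), and different random initial conditions can settle into different configurations, i.e. different basins of attraction. All parameter values are arbitrary.

```python
import random

def toy_landscape(n_locations=12, steps=500, pull=0.05, congestion=1.5, seed=0):
    """Toy agglomeration dynamics on a ring of locations (illustrative only).

    Attractiveness of a location = activity in itself and its two neighbours
    (a centripetal force) minus a congestion penalty that grows with its own
    size (a centrifugal force). Activity slowly flows from locations with
    below-average attractiveness to those above average.
    """
    rng = random.Random(seed)
    share = [rng.random() for _ in range(n_locations)]
    total = sum(share)
    share = [s / total for s in share]              # shares of total activity

    for _ in range(steps):
        attract = [share[(i - 1) % n_locations] + share[i] + share[(i + 1) % n_locations]
                   - congestion * share[i] ** 2
                   for i in range(n_locations)]
        mean = sum(a * s for a, s in zip(attract, share))
        share = [s * (1 + pull * (a - mean)) for s, a in zip(share, attract)]
        total = sum(share)
        share = [s / total for s in share]
    return share

if __name__ == "__main__":
    # Different random initial conditions can settle into different
    # configurations of agglomerations, i.e. different basins of attraction.
    for seed in (0, 1, 2):
        print("seed", seed, [round(s, 2) for s in toy_landscape(seed=seed)])
```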

Proof

While the pilot with 3 locations served as a proof of concept consistent with past research, he added more complexity (while still letting the model evolve) by adding more locations. The properties of the model evolved naturally once the number of locations was increased and seeded randomly. The natural emergence of the basins of attraction depended crucially on the initial conditions, a hallmark of complex adaptive systems: the initial conditions were set and the emergence was observed. The presence of external economies and asymmetry seeded multiple equilibria. The 12-region model showed traces of the properties of complex systems: moving from chaos to order, self-organizing in the process, and exhibiting simple regularities that emerge based only on the initial conditions. His simulation runs shared strong similarities, and he expects a realistic model to share them too. But the emphasis on the initial conditions is vital, as the model can not only evolve but evolve differently under different initial conditions.

Conclusion

In summary, Krugman raises the question of how economies can be complex systems and then substantiates it with a simple model based on simple rules and assumptions. He extends the model further to bring in more dynamism, but observes that the model evolves naturally and the complexity emerges, all based only on the initial conditions. The economy moved towards multiple equilibria, a characteristic of the non-traditional view of economics. The approach was adequate, as he grounded the argument in his own past published research to give the new investigation a theoretical footing.

Computational modeling of city formation – Prof. Scott & Kurt

Questions looked into:

In this paper, the authors look into:

• What factors cause cities to emerge; the authors bank their work on finding how transportation costs, the elasticity of substitution and the percentage of workers in the population add to the dynamics of city formation.

• In the midst of these factors that can affect the model, the authors have studied the emergence of equilibrium points, how the complex combination of different model parameters influences the results, and what happens when random components are introduced.

The approach and how it scores:

The first thing to appreciate about this paper is its grounding in past theory and research, and its extension of that work into areas that were infeasible with other methods. The authors used computational modeling to work around some of the limitations of the earlier approaches.

The first step was to move away from the assumption that cities exist and instead trust the notion that cities emerge. Focusing on emergence brings in the perspective of how patterns can emerge from parameters and interactions enabled at a granular level. The authors based their modeling on the iceberg principle (under which a fraction of any shipped good is lost in transit, so transport costs are paid in the good itself) and the Dixit-Stiglitz model, both used earlier by Krugman to study city formation. But where computational modeling is effective is in bringing in two-dimensional modeling, which makes it possible to include more locations and adds specificity in relation to real-world situations. While adding the second dimension is the primary contribution, using computational modeling to conduct detailed statistical analysis of the results provided a more rigorous analysis of the problem with more parameters.

Robustness checks:

During modeling, several robustness checks were employed to conform to past research as well as to validate the new findings of this study. The independent variables were chosen based on past research, which helped validate the model and show how the past research translates to the current model; this served as the first sanity check. Wherever arbitrary assumptions were made, sensitivity analysis was conducted to prevent the assumptions from overly influencing the results. An example is the sensitivity test on the assumed city size, which was varied over a range to exclude any bias. Before conducting the main simulation runs, the authors validated the model against the past models of Krugman and Fujita et al., which served as a validity test.

What I learnt from it:

This article offers insights into how to leverage past research and how to make moves that conform to past research and yet extend the field. I also had a first look at hypothesis formulation in complexity economics and, more importantly, at which tests lend robustness to one's results. How to use regressions in simulation work remains to be discussed.

Using Classifiers to Solve Warehouse Problems – Prof. Scott and Kurt

Questions:

  • In this paper, the authors tried to address the question of where to place warehouses and where to place IT infrastructure. This problem was studied in the context of a firm entering a developing country where there are a set of farmers and an incumbent firm.
  • While addressing this question, the authors were implicitly trying to address a much broader question: whether a more generalizable solution to the warehouse location problem is possible. The authors thus searched for general rules that can be applied to other, similar problems and that are not sensitive to the parameters of a specific problem.

Approach:

The authors set the context of developing markets and the firm’s location decisions in terms of the warehouses, the competition, the farming community and the IT infrastructure. They used the bucket brigade algorithm and genetic algorithms to identify a set of rules that are generalizable.
The authors set the right expectations during the investigation. They provided a good explanation of how current methods are problem-specific and out of touch with the real world, and they positioned their method as better suited to state dependence. They used MATLAB to program the simulation.
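Since the genetic-algorithm side was new to me, here is a bare-bones skeleton of that kind of rule search (a generic sketch, not the authors' MATLAB implementation; the bit-string encoding and the placeholder fitness function are my own assumptions):

```python
import random

def genetic_search(fitness, rule_length=20, pop_size=50, generations=100,
                   mutation_rate=0.02, seed=0):
    """Generic genetic algorithm over bit-string rules.

    `fitness` maps a rule (list of 0/1) to a score; here it stands in for
    "how well this placement rule performs in the simulated market".
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(rule_length)] for _ in range(pop_size)]

    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        survivors = scored[:pop_size // 2]           # selection: keep the top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, rule_length)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Placeholder fitness: reward rules with many 1s (stand-in for a simulation score).
best = genetic_search(fitness=sum)
print(best, sum(best))
```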

Proof of concept and validity:

Despite being a first attempt, and despite the limitations acknowledged in the final section, the model incorporates several features for robustness. Some assumptions helped eliminate confounding effects in the analysis. The model incorporated 5 elements of the model’s state, as it was a state-dependent model. To derive the general rules, classifiers were used as a tool for learning the rules. The state dependence of the approach makes it better suited to real-world situations.

The authors used the model with the training data as a proof of concept. The more interesting and informative trends matched real-world situations such as the Burger King strategy and the Nicholas Carr strategy, though there were limitations regarding new markets. Additionally, a comparison with common trends in operations management provided satisfactory results, and the rules’ predictions were consistent with past research. Genetic algorithms were used to find the best set of rules, and it was verified that the best rules matched the generic rules.

While this work has self-admitted limitations, including its grounding in theory, it does a good job within those limitations. It avoids past deficient methodologies for solution design while leveraging established work for the new model design. As it worked well with the training data and compared reasonably with past research and allied fields like operations management, it is an inspiring first step towards finding generalizable solutions.

Living in Enclaves – Domenico Parisi and Massimiliano Ugolini

Questions:

In this paper, the authors look into the question of what are the consequences of living in small isolated communities and of living in larger environments as members of bigger communities. Digging deeper, they look into:

a. What are the consequences of living in enclaves? What causes the extinctions in enclaves?
b. What happens to the populations living in enclaves compared with populations free to move in the entire environment?
c. How will the enclave and the whole behave when the artifacts are introduced?

What the model is about:

The authors have used agent modeling similar to Sugarscape and Echo. The resources in the environments were highly variable both spatially and temporally. The initial conditions for the agents and the environment were set based on some fixed and some random parameters and the simulations were run to see how the patterns emerge based on the local interactions between the agents and the environment. The authors noted that extinctions were rampant in the enclaved populations and the authors hypothesize the reasons to be ecological and genetic.

The authors then extended the simulation by bringing in artifacts. They observed that the introduction of artifacts could counter the ecological factor of spatial heterogeneity in food availability, thereby rooting out extinctions in enclaves. The authors increased the complexity of the model in two steps: first without artifacts and then with artifacts. In addition, they ran a third scenario in which the agents inherit artifacts from their parents, to see the effect on selection. Another scenario was run in which the enclaves existed for artifacts rather than for individuals.

My concerns:

There are a few concerns that might help improve the paper and extend it.

Firstly, the authors begin with an assumption about how enclaves are formed in humans, but I am not sure this concern is directly transferable to agent-based modeling, at least in differentiating between humans and prey/predators.

Second, the reasons for the extinctions in enclaves during the first run were themselves only hypothesized.

Third, it was unclear how the heterogeneity in food distribution leads to extinction when the food is replenished rapidly for a dwindling population. This cannot be the cause of the extinction effect when the agents have simple initial conditions and can thrive just by grazing. If food scarcity causes the population to decrease at the beginning, the decrease should stop once replenishment catches up with the current set of agents.

Fourth, when enclaves are formed, the explanation about genetically transmitted abilities is unclear. It looks like a forced explanation to suit the results if we consider the case of able agents being in a food-rich enclave. It is unclear whether such enclaves emerge at all, and if they do, it would contradict the second hypothesized explanation that genetic disabilities lead to extinction; it would not be so in at least such enclaves.

My extension ideas:

This enclave model provides scope to investigate extensions in the following areas that bank on the co-operation among nations as a broader shadow.

1. What will happen if trading of the artifacts is allowed among the agents – within the enclaves in the enclaved scenario and in the bigger level scenario without enclaves? - I hypothesize that the extinctions will come down within the enclaves. There will be an overall improvement in the non-enclave scenario.

2. What will happen if trading of the artifacts is allowed among individuals at the bigger level as well as within enclaves in the enclave scenario? – I hypothesize that there will be an improvement in the overall conditions, with lifespans increasing in both scenarios.

3. What will happen if we treat the enclaves as nations and allow trading of artifacts between/among them:

a. Between two rich enclaves – the rich gets richer
b. Between a rich and poor enclave – the poor will be better off and may avert/delay extinctions
c. Between two poor enclaves – there is a chance of better survival

4. What will happen if trading is allowed among all the enclaves but we impose a kind of embargo on a few, as happens today with some nations? – I hypothesize that the embargoed enclave moves towards extinction while the other enclaves avert extinctions.

Above all, I will have to find a theoretical grounding for trading and draw on theories of co-operation among nations to substantiate the extensions above.

Where to submit:

As this research was published in the journal Complexity, I think an extension would be a candidate for the same venue, since Complexity clearly accepts such themes. Citing published work from the target journal helps during submission and will come to my aid. If I lean on trading in my extension, I might also try 'Computational Economics', as I would be modeling an aggregate phenomenon.

My paper stands a better chance of acceptance if I ground it in past research and clarify the original model before presenting my extensions, and the extensions themselves need theoretical grounding. My contribution to enclave theories will concern how trading can affect the prosperity of nations, how technology and technology transfers play a role, and what principles lie behind the success or failure of trading blocs. While nations may not prevent others from mass extinction, they may help prevent mass starvation.

Week 13 - That's Interesting!

The Murray Davis paper discusses what makes a theory great and argues that the capacity to stimulate interest is a necessary characteristic of greatness. The common element of all interesting propositions is that they deny certain assumptions of their audience. Davis created an index of the interesting containing twelve logical categories – seven characterizing a single phenomenon and five characterizing relations among multiple phenomena.
This semester we have studied several different approaches to simulation. I will attempt to tie together the Davis article and what we read and discussed over the entire semester. To begin with, I consider myself closer to Davis's commonsense world than to his expert world for now, and perhaps on a journey from the former to the latter.

Davis says that the best way to trigger interesting research is to attack assumptions, i.e., all interesting propositions deny certain assumptions of their audience. He mentions this briefly at the beginning of the paper and elaborates on it towards the end.
Brian Arthur gives reasons why things become more complex and points to the debatable effectiveness of the sub-systems we create to support that growing complexity. This brings the 'Function' propositions into the discussion. While the secret of well-being is generally touted as simplicity, Arthur presents complexity as the reason for our existence and argues that it is a marvel when it evolves naturally and delivers powerful performance; when we seek it as an end in itself or allow it to go unchecked, it merely hampers. In Davis's terms, what seems to be a phenomenon that functions ineffectively as a means to an end is in reality a phenomenon that functions effectively.

We have discussed several times that the best way to conduct a simulation is first to build a model grounded in a theory and its related assumptions, establish the soundness of the model, and then play with the assumptions. Tweaking or relaxing a few assumptions may be the first step not only in conducting a simulation but also in seeing how doing so affects the theoretical grounding we began with. The Sugarscape model is a nice demonstration, as few would have expected an economy to be born in silico from simple interactions. More precisely, the authors wanted to create the scape and see what emerged from the bottom up. The emergence of social structure and an economy was a startling observation, likely beyond the expectations of the 'commonsense' world Davis talks about. I would personally have been a skeptic had I not seen it for myself. The model would definitely be an interesting phenomenon for a layman, if not for a student like me, as it fundamentally attacks the assumption that bottom-up emergence in silico may not be feasible. Another intriguing feature is how tough it is to predict what will emerge, however learned you are. The Sugarscape model may fall into the 'Organization' category depending on how you look at it: for the modeler it appears to be an organized phenomenon with order emerging, while for an outsider it may look disorganized until he sees for himself the order emerging from the interactions of the agents. There is a shift in perception depending on the viewpoint. The broader patterns also gel with 'Composition': what seems to be a single emerging phenomenon is in fact built from the actions of heterogeneous individual agents.

The Game of Life offers a partial parallel to the 'Stabilization' phenomenon. Even though the rules are chosen carefully and fixed in advance, it is hard to tell whether a pattern will die out completely, form a stable population, or grow forever. If we assume the system is stable because of the rules set at the beginning, then what we take to be a stable and unchanging phenomenon is in reality an unstable and changing one.
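For concreteness, here is a minimal implementation of the standard rules; the seed pattern (the R-pentomino) is a classic example of how hard the long-run fate is to predict from the rules alone.

```python
from collections import Counter

def life_step(live_cells):
    """One step of Conway's rules: a cell is live next step if it has exactly
    three live neighbours, or is currently live with exactly two."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live_cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# The R-pentomino: five cells whose long, chaotic evolution is famously hard
# to foresee just by reading the rules.
cells = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}
for _ in range(100):
    cells = life_step(cells)
print(len(cells), "live cells after 100 generations")
```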

Sometimes it is common sense itself that is attacked, instead of or in addition to the assumptions of the commonsense world. The Evolution of Cooperation was an interesting study, as I did not foresee tit-for-tat winning Axelrod's tournaments against contrary expectations. The 'Abstraction' phenomenon seen in the game is again open to a two-way interpretation: what seems to be an individual phenomenon, each agent playing tit-for-tat, emerged as a holistic phenomenon winning in the long run, while what looks like a holistic phenomenon relied on individual agents adopting tit-for-tat for it to emerge as the best strategy. Can we also see the evolution of cooperation as an example of the 'Evaluation' phenomenon? The answer might be a partial yes: what seems to be a bad phenomenon turns out in reality to be a good one. Tit-for-tat carries a negative connotation, being literally about equivalent 'retaliation', but in the long run it is the win-win strategy if leveraged right. Here the literal meaning is dominated by the contextual meaning, as this strategy emerged as the best at fostering cooperation.
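A minimal sketch of the iterated prisoner's dilemma with tit-for-tat, using the standard Axelrod payoffs (T=5, R=3, P=1, S=0); the opposing strategy and the round count are illustrative choices of mine.

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # TFT is exploited once, then both defect
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation throughout
```

Tit-for-tat never outscores its direct opponent in a single match, but across many pairings its readiness to cooperate gives it the highest aggregate score, which is the two-way individual/holistic reading above.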

While Davis talks about a shift needed in how we train future researchers, it rings a bell with Resnick's advocacy of a constructionist approach to education. What is similar in the two cases, in my view, is the fundamental shift needed in how we view the current state of things and how both need to be geared up to embrace phenomena that produce better results.

It is tough to pick just two papers I am fond of, as there are more than two. But I would definitely choose one about agent-based modeling and related to Sugarscape, as I feel it is path-breaking to create things in silico through emergence. I would point to 'Seeing Around Corners', which summarizes the simulation approach based on multi-agent emergence. The abstract itself begins by saying that the real societies are both more predictable and more surprising than we thought. Among other things, the paper mentions how Axtell could see Zipf's law emerge in what was nurtured in silico. The emergences Axtell created throughout his career ally with the organization, composition and abstraction phenomena, whether it is Zipf's law or the Pareto principle. Schelling's model, described in the paper, showed that even the simplest of societies could produce outcomes that were simultaneously orderly and unintended: outcomes that were in no sense accidental, but also in no sense deliberate. This again is close to the organization phenomenon. If 'composition' stands for at least one element being common between two groups, the Schelling model speaks to it, given how segregation arose from a complex system whose collective results bear no close relation to individual intent. There is a hint of co-existence here, or the lack of it; there is a generalization insinuated, and there is organization, composition and abstraction depending on how we look at it.
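A minimal Schelling-style sketch of my own makes the orderly-but-unintended point concrete; the grid size, mix, and one-third tolerance threshold are all illustrative assumptions rather than Schelling's original parameters.

```python
import random

random.seed(0)
SIZE, THRESHOLD = 20, 1 / 3   # illustrative grid size and tolerance

# Roughly 45% type X, 45% type O, 10% empty cells.
grid = [[random.choices(["X", "O", None], weights=[45, 45, 10])[0]
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(x, y):
    """An agent is unhappy if under THRESHOLD of its occupied neighbours match it."""
    me = grid[x][y]
    nbrs = [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    occupied = [n for n in nbrs if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

for _ in range(50):   # relocation sweeps: unhappy agents move to a random empty cell
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not None and unhappy(x, y)]
    for (x, y) in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

print("unhappy agents remaining:",
      sum(grid[x][y] is not None and unhappy(x, y)
          for x in range(SIZE) for y in range(SIZE)))
```

In runs of this kind the grid typically sorts into visible same-type clusters even though no single agent wants segregation, which is exactly the orderly-but-unintended outcome the paper describes.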

If I have to choose another paper, there is a tie between the brief article by Brian Arthur on how complexity emerges and the article on the El Farol problem. I will pick the complexity article, as it laid a quick foundation on emergence and complexity. As mentioned above, it relates tangentially to the 'Function' proposition but offers a foundation for exploring more of the twelve categories in future reading.

My paper is about what determines the level of technology penetration and adoption under various aggregate economic conditions. Put another way, it studies which nation-level factors determine the penetration of ICT. Technology penetration has many puzzles attached to it. The perception is that technologies need some level of education, national openness, infrastructure and so on in order to penetrate. But if education influences ICT adoption, something strange is happening in countries like India, where ICT is spreading even though nearly 40% of the population is still not considered viably educated. Different kinds of segregation are emerging globally, separating the technology-haves from the technology have-nots. Are these new segregations detrimental, or are they opportunities? There is also an intriguing relation between education and the openness of nations, except under dictatorial regimes. What can make my paper interesting is identifying the causes and effects of greater ICT adoption in these countries and how it leads to better social, economic and intellectual capital. It might also look preliminarily at whether ICT matters at all, or even challenge the very idea that ICT leads to economic growth. As I will be leaning towards ABM, I may find scope for generalization, composition, abstraction and correlation, among others, in my emerging results and their interpretation.
