Archive for the ‘systems science’ Category

What’s the plural of Singularity?

October 22, 2017

The Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes — Kevin Kelly

One of the problems with discussing The Singularity is that there are a number of definitions of the concept. It started with the idea of exponentially improving machine intelligence (AI), then added an associated technology growth, and ended with a biotechnology explosion and human-machine hybridization. So, which one are we to use? Or, can we use any of them? Is The Singularity real?

In a recent essay on the Singularity Web Log, the author raises an issue that challenges the very basis of The Singularity: the claim that technological growth is logistic, not exponential. The difference between the two equations is a limiting term. For example, take population (N) growth over time (t). Population grows at some rate (r).

Exponential:   dN/dt = rN

Logistic:      dN/dt = rN * (K-N)/K

where K is some physical limiting factor, in this case, carrying capacity (see the article for a nice graphic).
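A quick numerical sketch makes the difference concrete (the parameters are made up for illustration): under the same growth rate, the exponential curve runs away without limit, while the logistic curve flattens out as N approaches K.

```python
# Euler integration of the two growth models above: dN/dt = r*N
# (exponential) and dN/dt = r*N*(K - N)/K (logistic).
# All parameters are illustrative, not fitted to anything.

def simulate(r, n0, steps, dt=0.01, K=None):
    """Integrate dN/dt = r*N, optionally damped by the logistic term (K-N)/K."""
    n = n0
    for _ in range(steps):
        rate = r * n if K is None else r * n * (K - n) / K
        n += rate * dt
    return n

r, n0, K = 0.5, 10.0, 1000.0
exp_n = simulate(r, n0, steps=2000)        # exponential: keeps accelerating
log_n = simulate(r, n0, steps=2000, K=K)   # logistic: flattens as N nears K
```

Run long enough, the exponential result dwarfs K, while the logistic result approaches K and never passes it.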

Unfortunately, at this point, the essay wanders off into mysticism — K doesn’t matter because that’s a physical, not a machine intelligence concept, the map is not the territory, the machine is not the brain, my imagination is better than your imagination.

So, what about this K thing? Is it really not a limiter on machine intelligence? Is AI really not grounded in the physical world? Stated like that, the obvious answer is, of course it is. And to the extent that it is, it is limited by some definition of K. For the purposes of our discussion, K can be considered an outgrowth of the difference between electrons and molecules, to use Nicholas Negroponte’s phrase. Molecules are heavy, take up space, and are expensive to move. Electrons are essentially weightless, and can be moved anywhere almost instantly, almost for free. Shifting publishing from paper books to e-books (still a work in progress) totally changed the dynamics of the industry. This electron/molecule dichotomy is what drives our discussion of K.

Take the most basic definition of The Singularity: that soon we will have the ability to build an AI that is better at designing AIs than we are. At that messianic point the growth in AI capabilities will become exponential and we cannot foresee the ending. The trouble is, there’s a difference between the concept of a really strong AI and the implementation of that concept. An AI is implemented as computer code running on computer chips. Can this super AI¹ design AI², the next generation of chips and software, exponentially faster than humans can? Of course it can; that’s the basis of The Singularity. Can we then retool a $5 billion wafer fab to produce those chips for AI² exponentially faster? Can we manufacture the motherboards that will accept those chips? Build arrays of servers and ship them and install them at server farms around the world before AI³ comes down the pike? Perhaps AI¹ can show us how to do it faster, but exponentially faster? For The Information Singularity, K is the interface between the conceptual world and the real world.

When we take the next step, from The Information Singularity to The Technology Singularity, we run into the same K. AI² might be able to design better batteries and lighter cars, but actually building them takes time. And retooling takes time, and those times are not likely to be reduced nearly as fast as the designs are improved.

And finally, the biotechnology, human hybrids, new human race singularities are likely to be the slowest of all. Yes, we will be able to modify DNA to give us healthier bodies, computer-friendly brains, and two additional primary colors, but biology will not be rushed. As the old programmer joke about bringing in more staff on tardy projects goes, it’s like putting nine women on the job so you can produce a baby in one month.

So, it looks like the heart of K as a limiting factor on The Singularity is time. The Information Singularity will cause computations, or rather, computation-driven decisions, to be made in exponentially less time. But the real-world instantiation of those decisions will still take place in Real World time. What makes this a true constraint on The Singularity is that time is a fundamental concept. The very heart of The Singularity concept is exponential time. If the application of information to molecules has to take place in Real Time, then, like the speed of light, our approach to The Singularity will slow the closer we get to it.

Now, there is one bright spot here. In the equations above, N was population. In our calculations N would be the rate of change of information processing, technology adoption, and so on. So dN/dt measures the change in the rate of change over time (and should probably be written d²T/dt², where T is technology).

The essay I’m quoting from takes a doom-and-gloom message from the exponential-versus-logistic comparison, because the “overarching and obvious scenarios are: dramatic change, or relative stasis.” No, they are not.

If the Logistic Theory is correct, the rate of technology change, of technology adoption, will at some point level out. For tens of thousands of years, humankind faced an essentially zero rate of change. The next thousand years was just like the last. Then things started changing. New technology appeared at such a rate that the next century was clearly better than the last. Then the next decade. Now, we are at a point where, if you wait two years, cutting-edge technology will be wildly different. And if we’ve just rolled off the exponential part of the Logistic Curve and onto the flat, that’s the way things will stay — every two years we’ll see major changes in our world.

A fast, steady increase in technology may not be as exciting as a never-ending exponential, but at least you’ll be able to say that some part of your four-year college education is still valid when you graduate.


Multiple Perspectives and the F-35

July 28, 2015

A couple of weeks ago there was a leaked report on the inability of the F-35 Joint Strike Fighter to defeat an F-16 in a set of Basic Fighter Maneuver engagements (read it here). This set off a firestorm of discussion on the web, between those who want to kill the project and those who said the report, and its interpretation, were flawed. There were complaints from the fighter pilot community that anyone with a blog had now become an air-to-air combat expert.

F-35 and F-16 strike a pose for the photographers


Well, IANAFP, but I think there are some aspects of the discussion that have been missed. Let me map the discussion to Linstone’s Multiple Perspectives approach. This will hopefully shed a different light on the arguments, as well as providing a good example of how the Multiple Perspectives approach works.

Hal Linstone, whom I had the pleasure to know when I was a grad student in the Portland State Systems Science Program, is a former RAND Corporation associate, and one of the developers of the Delphi methodology. Not Delphi the Object-Oriented Pascal product, but a technique for getting agreement amongst experts. He is also famous for the approach to framing a problem that he calls Multiple Perspectives.

Basically, MP says that every business problem can be considered along three dimensions: Technical, Organizational, and Personal.

Technical, as you might expect, holds that a given problem is one of inadequate technology, and that it can be solved by throwing more engineers at it. This viewpoint informed most of the systems development projects at the end of the last century, and its proponents were always surprised when their approach didn’t work out.

Organizational says that many problems occur because of how the organization is structured and what its rules are. Very often something cannot be done because there is no box on the form that can be checked. When same-sex marriage was finally allowed, many counties had problems, because their software wasn’t set up to handle anything other than one male and one female. You might think this is a technical issue, but the root is the failure of the organization to consider the possibility when they wrote the requirements. Counties that still relied on paper forms had the same problem, but at least there they could make a pen-and-ink correction.

The Personal dimension says that very often the root of a problem is people, sometimes a specific individual in an organization. People interpret the rules, and one individual’s interpretation can differ from another’s. If that person is in a position of power, then their interpretation rules. In an extreme case, an individual might block a technical improvement because they fear that the new technology will harm their job.

Deming’s parable of the red and white beads can be used as an example of MP.  Is the problem a Technical one, of not giving the worker the tools to reject wrong-colored beads? Is it Personal, in that the worker needs better training and motivation? Or is it Organizational, because the worker should never be required to separate out the beads in the first place?
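Deming’s point is easy to demonstrate in a few lines of code (a sketch with illustrative numbers, not Deming’s exact setup): the defect counts vary from worker to worker, yet every worker runs the identical process, so the variation belongs entirely to the system.

```python
import random

# A sketch of Deming's red bead experiment: each "willing worker" scoops
# 50 beads from a box that is 20% red (defects). The numbers here are
# illustrative. The workers are interchangeable; only chance differs.

def work_day(rng, scoop=50, red_fraction=0.2):
    """One worker's scoop: count the red (defective) beads drawn."""
    return sum(1 for _ in range(scoop) if rng.random() < red_fraction)

rng = random.Random(42)
defects = [work_day(rng) for _ in range(6)]  # six workers, one identical process
```

Rank the workers by their defect counts and you are ranking coin flips; no amount of training or motivation changes the box of beads.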

So, how does all this apply to the F-35 in general, and to the air combat discussion in particular? Before we begin, let me say that a number of the arguments presented are spread over different articles, and so you are going to get multiple links to the same article. Now let’s see.

Personal: The argument here is that the flight was a test flight, and that the pilots were looking to accomplish test objectives, not win a bar bet. More to the point, the F-35 hasn’t been around enough for anyone to become an expert in it, and so we haven’t developed tactics for it. This is true as far as it goes, but the main article makes it sound like the pilot was a n00b. He may only have had 100+ hours in the F-35, but I’m pretty sure he’s a multi-thousand-hour test pilot.

Technical: Two points stand out. First, not all the F-35 technology was available, off-boresight aiming being the most important example. Second, as with your car, changing the performance of a modern fighter is mainly a matter of changing the software. You don’t optimize your carburetor any more, you reburn the EPROMs. So too with today’s computers-with-wings. Indeed, one of the reasons for the test flight was to define areas where the software needed tweaking.

Organizational: The current employment concept says the F-35 should never have to dogfight, just as a combat Marine should never have to engage in hand to hand combat, except as a last resort. The idea is to use it as a networked sensor platform and employ the full range of US weapons, including long range AAMs and SAMs, while using the stealth to keep from being detected. This approach was demonstrated using a commercial air combat game.

My Two Cents

Personal: I have nothing much to add here. Our pilots and aviators are the best in the world, with more flying time than the pilots of any other country. We may have cut back training hours due to sequestration funding, but the worldwide operations tempo continues unabated. The Russians and the Chinese have, historically, gotten what one of my commanders used to call “just enough flying hours to kill you.”

One of the articles notes that the only people who are really competent to comment on the F-35 capabilities are the program managers with the appropriate clearances, and the rest of us are, essentially, sitting with our backs to the fire, trying to interpret the shadows. This is certainly true. On the other hand, I can tell you from my years at the Pentagon that it’s also true that program managers will lie, and will leak classified information to support their programs while suppressing unfavorable evidence via overclassification. On the other other hand, “any stick will do to beat a dog”, and much of the furor over the test report is being raised by people who are against the F-35 for other reasons, such as cost, or “not produced in my district”.

Technical: My issue here is what might be called the historical component of the technical perspective. The F-35 supporters pooh-pooh the comparisons with the F-4 and F-105 in Vietnam, pointing out the tremendous differences in weapons capabilities since then. This is correct, but misses the point. At a more abstract level, in the early 1960s we had a concept of what an air war would look like, given the new weapons systems, and we designed our force structure around that concept. When the war actually started, it turned out our weapons didn’t perform the way we thought they would, and the hostile environment was different from what we thought it was going to be, and we ended up with deficiencies that took a couple of years of combat to overcome. Years.

Organizational: From the discussions, the employment concept for the F-35 is much like our ideas of how the early hours of WWIII in Europe would roll out — clouds of their fighters meeting clouds of our fighters, and stay inside your root cellar lest you be hit by falling debris. Or set piece engagements in narrowly defined regions, like the Gulf, or the Baltic. All of them seem to be based on a networked and ‘weapons free‘ scenario where, on a good day, you shoot all your missiles Beyond Visual Range, and head home in time for Happy Hour.

The problem is, IMHO the most likely future conflicts will be narrowly constrained affairs, where third-party neutrals will be going about their business while you fight. Think of Pratchett’s “melee coming through“. During the Tanker War in the Gulf, everyone continued to operate commercial shipping and airlines, with sometimes disastrous results. If my quick check on Orbitz is correct, there’s something like sixteen flights from Tokyo to Singapore per day, all of them flying in the vicinity of Taiwan. It’s entirely likely that the F-35 will have to operate in an environment where the Rules of Engagement require visual ID before weapons launch.

UPDATE: Here is a much more detailed discussion of flaws in the F-35.

The bottom line is that these issues are much more complex and nuanced than a simple blog post on turn rates and energy levels would have you believe. The proof of the pudding won’t be found for another five years or so, when all the teething troubles and upgrades and tactics have been worked out. Most of the current discussions are about “did we build the system right?” A much longer blog post is needed to discuss the key question, “did we build the right system?”

SCIS-ISIS 2012

August 21, 2012

The 6th International Conference on Soft Computing and Intelligent Systems and the 13th International Symposium on Advanced Intelligent Systems will be held at the Kobe Convention Center in the Kobe Portopia Hotel this November. I have two papers submitted. Or, I should say, we have two papers, because in this business you don’t get anywhere without a lot of help from your friends. UPDATE: Both papers have been accepted, which is why I’m posting this here and now, after a couple of false starts.

The first paper is on the application of a Systems Science tool called Reconstructability Analysis to understanding the genetics of Alzheimer disease. Here’s the abstract:

Reconstructability Analysis (RA) is an information- and graph-theory-based method which has been successfully used in previous genomic studies. Here we apply it to genetic (14 SNPs) and non-genetic (Education, Age, Gender) data on Alzheimer disease in a well-characterized Case/Control sample of 424 individuals. We confirm the importance of APOE as a predictor of the disease, and identify one non-genetic factor, Education, and two SNPs, one in BIN1 and the other in SORCS1, as likely disease predictors. SORCS1 appears to be a common risk factor for people with or without APOE. We also identify a possible interaction effect between Education and BIN1. Methodologically, we introduce and use to advantage some more powerful features of RA not used in prior genomic studies.
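RA is built on information-theoretic measures, so a toy calculation gives the flavor (this is my illustration with invented data, not the study’s data or the RA software): the mutual information between a hypothetical binary SNP and case/control status measures how much the SNP tells you about the disease.

```python
import math
from collections import Counter

# Mutual information I(X;Y) in bits, the kind of quantity RA-style
# analyses are built from. The SNP/case-control data below is invented
# purely to illustrate the calculation.

def mutual_information(pairs):
    """I(X;Y) in bits from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A hypothetical predictive SNP: genotype 1 is enriched among cases.
data = ([(1, 'case')] * 30 + [(0, 'case')] * 10
        + [(1, 'control')] * 10 + [(0, 'control')] * 30)
mi = mutual_information(data)
```

An uninformative SNP would score near zero bits; the stronger the association, the closer the score gets to the full entropy of the case/control variable.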

The second paper uses another Systems Science technique, agent-based simulation, to test Herb Simon’s theory of satisficing:

Satisficing is an efficient strategy for applying existing knowledge in a complex, constrained environment. We present a set of agent-based simulations that demonstrate a higher payoff for satisficing strategies than for exploring strategies when using approximate dynamic programming methods for learning complex environments. In our constrained learning environment, satisficing agents outperformed exploring agents by approximately six percent, in terms of the number of tasks completed.
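As a rough sketch of the satisficing-versus-exploring contrast (my own minimal model, not the paper’s actual simulation): give each agent a fixed time budget, charge one time unit per option evaluated, and let the satisficer take the first option that clears an aspiration threshold while the explorer evaluates every option before choosing.

```python
import random

# A minimal satisficing-vs-exploring sketch. All numbers (budget, option
# count, threshold) are invented for illustration, not from the paper.

def run_agent(rng, satisfice, budget=1000, n_options=10, threshold=0.7):
    """Complete tasks until the time budget runs out; return tasks done."""
    done = 0
    while budget > 0:
        options = [rng.random() for _ in range(n_options)]
        if satisfice:
            # Satisficer: pay for evaluations only until one clears the bar.
            for value in options:
                budget -= 1
                if value >= threshold:
                    break
        else:
            budget -= n_options      # explorer evaluates every option
        done += 1
    return done

rng = random.Random(0)
satisficer = run_agent(rng, satisfice=True)
explorer = run_agent(rng, satisfice=False)
```

The explorer always finds the best option on each task, but pays for it in evaluations; per unit of time, the satisficer finishes more tasks.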

In a later post, I’ll talk about the collaboration that led up to this.

Group vs Individual Selection in Evolution

August 21, 2012

Last June’s Edge had an interesting essay by Harvard professor Steven Pinker on why the idea of group selection as an extension of natural selection is wrong. His position is to take the baseline definition of evolution by natural selection

The core of natural selection is that when replicators arise and make copies of themselves, (1) their numbers will tend, under ideal conditions, to increase exponentially; (2) they will necessarily compete for finite resources; (3) some will undergo random copying errors (“random” in the sense that they do not anticipate their effects in the current environment); and (4) whichever copying errors happen to increase the rate of replication will accumulate in a lineage and predominate in the population. After many generations of replication, the replicators will show the appearance of design for effective replication, while in reality they have just accumulated the copying errors that had successful replication as their effect.

and then rewrite this in terms of groups.
(more…)
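The four points in that definition can be turned into a toy replicator model (my illustration, not Pinker’s): copies in proportion to replication rate, a resource cap, blind copying errors, and, as a consequence, rising replication rates.

```python
import random

# A toy version of the quoted definition: replicators copy themselves in
# proportion to their replication rate (points 1 and 4), finite resources
# cap the population (point 2), and each copy carries a small random error
# that is blind to its effect (point 3). All parameters are invented.

def generation(pop, rng, capacity=200, mutation=0.05):
    # Parents are sampled in proportion to replication rate, and the
    # resource cap holds the next generation at `capacity`.
    parents = rng.choices(pop, weights=pop, k=capacity)
    # Each copy picks up a random copying error.
    return [max(0.01, f + rng.uniform(-mutation, mutation)) for f in parents]

rng = random.Random(1)
pop = [0.5] * 200                  # everyone starts with the same rate
for _ in range(200):
    pop = generation(pop, rng)
mean_rate = sum(pop) / len(pop)    # drifts upward: selection at work
```

Nothing in the model “wants” anything; the upward drift in mean replication rate is just the accumulation of the copying errors that happened to replicate faster, which is exactly Pinker’s point about the appearance of design.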

Grad Students

April 28, 2012

I went through the Systems Science PhD program at Portland State at the end of the last century. I just got back from a quick trip down there to discuss two different research projects, and to give an impromptu presentation on one of them.

I currently teach in a College of Business. It is a great job, with superb colleagues and fun students…and yet. Most business schools don’t really have grad students. They have students going for their MBA’s. The difference is, most folks going for an MBA have been out in business for a while, have a job and a family, and are getting the degree as a way of moving up into management, or into upper management. It’s a straightforward practical program for those who have a life.

In other disciplines, like Systems Science, the grad students are there to continue to develop as researchers in their academic discipline. They often don’t have families. They often don’t have jobs outside the school, unless you count ones that require you to wear a paper hat. They very often spend their time working for their professors, doing research and teaching.

That struck me as I was sitting in the Systems Science building at PSU, listening to the grad student, and student/professor, discussions that went on around me. My right ear was getting snippets of a discussion of biosystems simulation, one that ranged from cell diffusion to the language of bees, and the issues associated with writing code to support it. My left ear was picking up discussions of symmetry breaking in physics and information theory, and of frozen accidents in evolution. In my presentation, on agent-based simulation, the discussion ranged from fractal networks to random Boolean networks to the desired level of expertise in a field. Afterwards, I sat in a group that discussed the importance of good data and consistency in phenotype definitions for GWAS analysis. Note that all of these were associated with actual research issues, and weren’t just late-night beer-fueled gabfests. Those came later.

Good as my job is, I miss that kind of wide-ranging, yeasty, no-limits discussion.

Bruce Schneier on Security

June 18, 2011

TED talks are an excellent set of 20min vids on interesting topics by interesting people. Here is security expert Bruce Schneier on emotions, models, and reality.

Robotalk – Much less than meets the eye

May 24, 2011

Or ear.

IEEE Spectrum has an article on two robots learning to communicate.

Lingodroid mapping maps

The thing is, as far as I can tell from the IEEE article (the press versions being less than useful), the two ‘bots were programmed to map their surroundings (presumably to a fine distance scale) using laser and sonar sensors. They were programmed to exchange gross location data, said data being a random string of consonant-vowel pairs. They were programmed to use this information to establish range-and-bearing data, and to exchange that data via a random string of consonant-vowel pairs. They were programmed to play games like “let’s go to pize”, and to compare their locations when each had thought they’d gotten there.

As a result of this programming, they were able to adjust their internal tables that mapped precise locations to coarse positioning words (the positions were coarse, not the words), such that they generally agreed that this region of precise mapping should be designated as that region of coarse mapping.

This is an interesting development, but it’s more along the lines of a good application of fuzzy logic than it is “development of language.” Essentially, they are creating ‘linguistic variables’ and then defining various membership functions on those variables. They have been programmed to be able to adjust those membership functions so that both of them agree on their shape and location.
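In fuzzy-logic terms, the game looks something like this (my reading of the setup, with invented numbers, not the Lingodroids’ code): each robot holds a membership function tying a word to a region of space, and each round of a location game both robots nudge their functions toward the meeting point until the functions agree.

```python
# Two robots aligning their membership functions for a shared word.
# The word "pize" and all numbers here are illustrative.

def membership(x, center, width=2.0):
    """Triangular membership function: 1 at the center, 0 beyond +/- width."""
    return max(0.0, 1.0 - abs(x - center) / width)

def play_round(center_a, center_b, rate=0.3):
    """Both robots nudge their notion of the word toward the meeting point."""
    meeting = (center_a + center_b) / 2
    return (center_a + rate * (meeting - center_a),
            center_b + rate * (meeting - center_b))

a, b = 0.0, 6.0    # the robots start out disagreeing about where "pize" is
for _ in range(20):
    a, b = play_round(a, b)
```

The gap between the two centers shrinks geometrically, so after a few dozen games both robots assign high membership to the same region, which is the “agreement” the article reports.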

So, did they reinvent fuzzy logic, or did they just apply it as programmed? IEEE doesn’t say.

Systems, General Systems, Systems Dynamics, and the Earth 2

May 15, 2011

Good artists copy, great artists steal. I just leech. Once again, Vandana Singh has come up with an excellent essay over on Strange Horizons. There, she talks about the importance of not just preserving biodiversity, but restoring it. This is important, because an ecosystem is a tightly woven mesh, pulled taut over an uneven, spiky fitness surface. Cut one thread and the mesh twists and distorts. (more…)

Obama vs Osama 2

May 11, 2011

There are two other issues that I skipped over in my original writeup. First is the question of mixed vs pure strategies. Second is the question of value assignments.

In game theory, when you don’t have a dominant strategy, or a saddlepoint (where neither player has an incentive to move), you have two choices. First is to go with a pure strategy. That means you pick the high-payoff strategy and stick with it. That automatically means that you will lose a certain percentage of the time. Not only that, your opponent can detect this, and change their strategy accordingly. Your second choice is to go with a mixed strategy, playing the two feasible strategies in combination to maximize your expected payoff. You still lose part of the time, but you will pick a secondary strategy when a secondary strategy is needed, just often enough to make it worthwhile. The problem comes when you have a one-shot game — you can’t depend on the wins and losses evening out in the long run, because there is no long run. Traditional analysis suggests you still use some random number generator to decide what to do, even if it’s only the once. Depending on the relative likelihoods, this might be a good idea. Since I doubt that President Obama used formal game theory to make his decision, I think it’s also unlikely that he rolled a d20 to determine the outcome.
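For the record, the standard 2x2 zero-sum calculation is short (the payoffs here are invented for illustration, not the values from my OvO writeup): pick the probability p for your first strategy that makes your expected payoff the same whichever strategy the opponent plays.

```python
from fractions import Fraction

# Optimal mixed strategy for a 2x2 zero-sum game via the indifference
# condition. The example payoffs are made up for illustration.

def mixed_strategy(a, b, c, d):
    """Row player's payoff matrix [[a, b], [c, d]]; returns (p, game value).
    Assumes no saddle point, so the interior mixed solution applies."""
    # Indifference: p*a + (1-p)*c == p*b + (1-p)*d, solved for p.
    p = Fraction(d - c, (a - b) + (d - c))
    value = a * p + c * (1 - p)
    return p, value

# Invented payoffs: rows are our two strategies, columns the opponent's.
p, v = mixed_strategy(2, -1, -3, 4)   # play row 1 with probability p
```

Against either opposing strategy the expectation is the same, which is exactly what makes the mix unexploitable, and why a one-shot game still calls for the random draw.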

The second issue is that of assigning values to the outcomes. My values were highly subjective, not to say, rigged. You may conclude that pattern-bombing Abbottabad would have a larger downside than shown. That’s fine. That’s perfect.

You see, what building this game does is force you to make your assumptions explicit. The same holds true for modeling and simulations and other kinds of games. Many times when people are arguing over something, and it seems like they are arguing past each other, it’s a case of one person having one set of assumptions about how the world works, and another person having a different set of assumptions. Forcing us to bring these assumptions out into the open means we can start arguing about the assumptions, not some presumed second-order effect that occurs if they’re true. There are a number of assumptions made in the OvO game. One is: what is the probability of an ISI leak if we tell them? Greater than 50%? 90%? Another is: what’s the difference in payoff among the three strategies, assuming we get OBL? There are others, but they are left as an exercise for the reader.

Obama vs Osama

May 6, 2011

While I was prepping for a class last week, it struck me that part of the decision-making process for the Osama bin Laden raid could be modeled as a game theory exercise.

Setting the Stage

The CIA believes OBL is living in a compound in Abbottabad, PK. We have three strategies for taking him out.
1. Two B-2 bombers, carrying 16 x 2,000lb bombs each.
2. Helicopter raid by Special Operations Forces
3. Joint raid with PK government forces.

Each of these has advantages and drawbacks. How do we decide what to do? Enter, game theory. (more…)

Systems, General Systems, Systems Dynamics, and the Earth 1

March 13, 2011

I just finished reading an essay by Vandana Singh, Author, Friend, Bloggatrix, and e-Pal, over on Strange Horizons. The essay begins halfway up a cliff in the Himalayas, and ends with the idea of rewilding nature and the promise of future essays on the why and the how. I recommend you go over there and read it. Don’t worry, you won’t miss anything — not much happens in this corner of the Internet.

Much of the essay deals with interconnectedness, and the human response to local emergence. I am inspired to talk about the same topic (thanks, Vandana), but in a more formally Systems Science fashion. (more…)

God hates shrimp – part 2

November 10, 2010

Last week, inspired by this shrimp tale, we talked about scientific theories (AKA models that explain observations) and showed how Darwin’s Theory of Evolution, updated via DNA, is a good description of what we see today, and a good enough predictor to use for commercial products that make money, today. What about the past? Is there anything that could disprove the Theory of Evolution as a descriptor of the ancient past? What would it take? Well, based on what we’ve said so far, you’d have to show that its predictions were wrong, or that it contained an irreconcilable internal contradiction, or that your model was better. More to the point, since modern day evolution theory (currently) passes all the tests, you have to show that when it is applied to the past it creates incorrect predictions or logical inconsistencies, or that there is a better model. (more…)

God Hates Shrimp – Part 1

November 3, 2010

This article in Science Daily got me to thinking about evolution and science.

A while back, when I was talking to a colleague about science, she asked what proof scientists would need to disprove evolution. I glibly said “but we know it works”. Which is a cop-out. Let me make another attempt.

First, let me repeat what I’ve said elsewhere about the language of science. To a civilian, the word “theory” means “a hunch”, but to a scientist it means “a model for explaining observed phenomena”. You can’t prove a theory. All you can do is disprove it. (more…)

Systems and Afghanistan

September 6, 2010

Michael Yon links to an interesting article by Rice and Filippelli on using technology to fight corruption in Afghanistan (I am using his version because it’s easier to read). As usual, I have grave doubts about the likely success of any technological solution to a complex societal problem. I have written about the Multiple Perspectives issue before, and I think it applies here.

In a nutshell, systems scientist Hal Linstone posits three Perspectives on any organizational problem — Technical factors (how the associated technology works), Organizational factors (how the rules of the organization are structured) and Personal factors (how key individuals see the problem and the issues surrounding it). In the computer field, most IT people think in terms of technical solutions to problems. Most of the time their solutions don’t work the way they think they should, because of the other two. To pull an example off the top of my head, what is the use of an ultra-secure voting machine in promoting democracy, if the law limits voting to males, and the president of the country thinks it ought to be males with property?

In the Rice and Filippelli article they point out how using cellphones for salary payments to police and soldiers cut out the middle-men, who were all corrupt, and actually sent all the money to bank accounts belonging to actual people — no skimming and no payroll padding. The police and soldiers involved thought they’d gotten a substantial pay raise, when all they got was their true salary.

So, say R&F, why not move more of the payment system onto cellphones? Well, this is why not — the Kabul Bank is in danger of collapsing due to corruption and fraud. A cellphone-based economy is subject to the same kinds of problems a paper economy is, just in a different form. If you have a corrupt banking Organization, and Afghanistan’s appears to be very corrupt, the kleptocracy will find ways to steal the people’s money from the bank. And, if the top People involved, like the president and his relatives, view the country as their own personal ATM, there will ultimately be little done to correct it. After all, from the view of the people on top, it is working.

I think that R&F’s point is a useful one, and I’m not saying don’t do it. I am saying that the Technical solution is not a panacea, and that we have to attack the problem on a wide range of fronts. Of course, that assumes that we have the power to do so. In this case, the Organizational kicker is that Afghanistan is a sovereign country that doesn’t need to do what we say.

UPDATE: Here is another example of how making an improvement in technology doesn’t always improve things.

Wednesday Wii – Train your Brain 1

June 30, 2010

In artificial neural networks, when you are helping your net learn how to respond to inputs, the tried and true approach is to divide your dataset into three parts – train, test, and validation. The first and biggest part is the training set. Think of this like the driver training track at a well-endowed high school. The neural net runs through this dataset many times, perhaps thousands of times, in order to learn how to respond to the inputs. You train it, and train it, and every so often you run it through the training set again, in test mode, to see how well it has learned. Then you go back to training. Much less frequently, you show it the second batch of data. This is the much smaller test set. The object is to see how it works with data it hasn’t been trained on. Think of this as taking your high school driver out on the local roads. You know your system has learned all it is going to when the results on the test set start to diverge from those on the training set. At that point you give it its driver’s license test by showing it the third, and usually smallest, dataset: the validation set. You only get to use this dataset once, and it tells you how ready your neural net is to face the wide, wild world.
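The three-way split itself is a few lines of bookkeeping (a sketch; the 70/20/10 proportions are illustrative, and conventions vary):

```python
import random

# Shuffle a dataset and cut it into the train/test/validation sets
# described above. The 70/20/10 proportions are illustrative defaults.

def split_dataset(data, rng, train=0.7, test=0.2):
    """Shuffle, then cut into train/test/validation (validation gets the rest)."""
    data = list(data)
    rng.shuffle(data)
    n_train = int(len(data) * train)
    n_test = int(len(data) * test)
    return (data[:n_train],
            data[n_train:n_train + n_test],
            data[n_train + n_test:])

rng = random.Random(7)
train_set, test_set, val_set = split_dataset(range(100), rng)
```

The three sets partition the data with no overlap, which is the whole point: any example the net has already seen during training tells you nothing about generalization.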

What has this to do with the Wii Fit, you ask? Well, the Wii Fit is training your brain as well as your body. In the balance tests, the key is to get your nervous system to respond to the inputs from your balance organs so that you can stand on one leg without falling on your butt. Think of all those exercises, from the Tree Pose to the Single Leg Extension, as the training set. As you do them, your internal neural net system learns to keep your balance under a wide range of conditions. Periodically, you get to take the Body Test, where you are exposed to, yes, a test set of actions that are not the ones you trained on. And therein lies the rub. If you go into the My Wii Fit Plus section, you have the opportunity to build yourself an exercise routine. In addition, if you click on the little Wii helper you get a chance to practice the various tests it gives you during a Body Test session. This is generally a bad idea, because what you are doing is training on the test set, and so I refuse to do it, with two exceptions that I’ll tell you about next time.

Brin on Climate Skeptics and Deniers

February 12, 2010

David Brin is a physicist and author. Here is a well-structured discussion of the difference between those who are skeptical about human-generated climate change and those who deny it is happening. It refers back to an earlier essay on the same topic, one that shows how we should be doing all the things needed to counter HGCC even if HGCC were false, because they are all beneficial in the long run.

Still fighting a sprained thumb. I have it braced and am keeping it warm, but I find I can’t simultaneously keep it warm and sit at my computer, so blogging will be even more anemic than usual.

Global Warming 2

December 21, 2009

EurekAlert reports on a Chemical & Engineering News analysis of the global warming debate. The bottom line of the discussion is that both sides of the debate agree that atmospheric carbon dioxide has increased since the 1700s, that most of the increase is due to human activity, and that global temperatures have increased since the 1850s. The disagreement is over causality — did the increase in CO2 cause the increase in temperature, or is the increase due to some sort of natural cycle? I’d say they’re asking the wrong questions.

The right question is this: given that global temperatures are rising (for whatever reason), and given that human activity emits a number of known greenhouse gases, and given that controlling greenhouse gas emissions is the easiest (effectively our only) way to push back against the warming, should we not do everything we can to limit those gases? Note well what I am saying. We may not know everything about warming. We may not have caused it. Human greenhouse gases might account for only X% of a given increase in temperature, where X is the number being fought over. It might be 100%, it might be 50%, it might be more — or less, although I doubt any reputable skeptic would claim zero. To the extent that it works, and to the extent that it’s the one lever we can put our hands on, should we not use it to fight the known warming trend?

Global climate is a complex dynamic system, possibly even a chaotic one. As such, it appears to have what are called tipping points, where it can be thrown from one basin of attraction to another. We have seen this in the past. Dynamic systems are susceptible to control through fairly subtle means. For example, Peter Senge talks about the idea of a trim tab, as used on big aircraft before we had power steering. The pilot isn’t strong enough to move a rudder the size of a barn door against a 300kt wind, but he can manipulate a smaller trim tab, a sort of rudder-for-the-rudder, which pushes the big control surface where he wants it to go. We don’t know enough about climate dynamics to say what would work for sure, but we can certainly hope that greenhouse gas control can prove to be an effective trim tab, even if some would doubt that it can control the entire rudder.

Hal Linstone, of the RAND Corporation and the Portland State University Systems Science program, talks about multiple perspectives as a way of understanding problems. In this case he isn’t talking about multiple individuals, but about multiple ways of approaching a problem. The three he concentrates on are the Technical, Organizational, and Personal perspectives.

The Technical perspective tends to think of all problems as technical ones, with technical solutions. If the problem is global warming, then put solar shades into orbit to block the sun, or dump iron into the broad ocean to encourage plankton to grow and eat the CO2, or, yes, put scrubbers on factories to pull CO2 out of the exhaust gas. Purely technical solutions often fail because they run afoul of limits imposed by Organizational or Personal issues.

The Organizational perspective asks if changing the rules about how an organization works might not help solve a problem. If you change the rules, you change the game. If companies have to pay the cost of polluting the commons with their effluent, perhaps they will find better ways to do things. On the other hand, if a set of rule changes is going to make it harder for a company to operate, that company may well oppose the changes.

I think that the Organizational perspective is one that argues for our current emphasis on cooling the Earth by finding ways to limit greenhouse gases, and doing this by changing the rules of the game. That kind of solution may cost money (though we keep hearing reports of how it can improve profits), but the costs are spread around a great many people in a great many countries. Most of the purely technical solutions are “point” solutions, where one country, or group of countries, has to continuously appropriate enough money to, say, put an enormous solar shade in orbit and keep it there. Organizationally (i.e. politically), that’s much harder to do.

The Personal perspective says that people matter, that key decisions might go one way or another, depending on who is making them. The difference in approach to warming by Presidents Bush and Obama is the most obvious example, but a 60 year old owner of commercial real estate in Miami, Florida might not support policies that will hurt him financially in the short run yet will only prevent his property from flooding half a century from now.

In my less charitable moments, I get the feeling that most of the global warming deniers take that position because it is organizationally or personally advantageous to do so. To quote Upton Sinclair, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it”. This is, of course, shortsighted, and gives no thought to what the future might bring, but to throw out another quote, Marx this time*, “Why should I care about posterity? What’s posterity ever done for me?”**

So, there’s my solution, the one we knew of from the beginning, and the one we will come back to once the evidence starts sloshing ankle deep on Wall Street. Of course, by then it will be too late.

*Not that Marx, the other one
**Thanks, Kurt, for the correction.

Getting Out of Afghanistan: He says it better

November 22, 2009

This just in from Juan Cole’s Informed Comment, William Polk echoes my sentiments, but does it a lot better: Let America be America, and Depart Afghanistan. He lists mistakes we have made that make it impossible to win, discusses the costs in lives and treasure of possible alternatives, and lists the one thing we can do to end up better off than we are now — get out, but do it in such a way that we encourage the AF traditions of governance that kept the country stable for hundreds of years.

There is a concept called Dynamic Programming. It’s more like linear programming than computer programming and is designed to find an optimum path from where we are now to where we want to be. It is used in finance (I have put X% of my money into stock A, what do I do with the rest), supply chain management (I have X% of my inventory in warehouse A, what do I do with the rest), and project management (I have completed X steps towards my goal, what’s the rest of the path). There are problems with its strict application, because formally it requires computing all paths. That’s not usually possible, and so you have Approximate Dynamic Programming. The key point behind it, the takeaway lesson for AF, is essentially a restatement of sunk cost. You are where you are. However you got there, whatever the decision process or cost, the only thing you can do now is to optimize the rest of your path to your goal.
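As a toy illustration of that takeaway (the route network and its costs are invented), here is backward induction on a small graph. The optimal remaining cost from any node depends only on that node, not on how you arrived there — the sunk costs drop out of the calculation entirely:

```python
def cost_to_go(graph, goal):
    """Compute the optimal remaining cost from every node to the goal
    by backward induction, the core dynamic-programming move. The
    answer depends only on where you are now, never on the path (or
    the money) already spent getting there."""
    # graph: node -> list of (neighbor, edge_cost); assumed acyclic here
    memo = {goal: 0}
    def solve(node):
        if node not in memo:
            memo[node] = min(cost + solve(nxt)
                             for nxt, cost in graph[node])
        return memo[node]
    for node in graph:
        solve(node)
    return memo

graph = {
    "start": [("a", 2), ("b", 5)],
    "a": [("goal", 4), ("b", 1)],
    "b": [("goal", 1)],
    "goal": [],
}
print(cost_to_go(graph, "goal")["start"])  # 4, via start -> a -> b -> goal
```

Approximate Dynamic Programming keeps the same backward-looking-free principle but estimates the cost-to-go instead of computing it exactly, since real problems have too many paths to enumerate.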

Jump to a different topic — Cybernetics. Cybernetics isn’t about computers, it is the control of dynamic systems. A quick and dirty description would be that you compare goals and results. If the results are not moving towards your desired goals, you take action to move them that way (traditionally, we insert a discussion of thermostats at this point). If you are consistently failing to achieve your goals, your choices are to expand the range of possible actions, or to change the goals. I will have a longer post on this topic after the end of the quarter, but suffice it to say that in AF, we have established (depending on who you read) a set of impossible goals, or no coherent goals at all. Either way, our range of possible actions is limited. A change of goals to something simple — a stable AF — and a clear-eyed recognition of the actions that would bring it about, would be a preferred way to establish a path that will lead us out of that country.

Global Warming 1

November 21, 2009

Over on Antariksh Yatra, there is a discussion of the state of the effort to fight global warming. The problem is that, as I see the Rulers of the World seeing it, nobody is going to be seriously inconvenienced by global warming — at least, nobody worth mentioning. Yes, we’ll have bigger hurricanes, but the insurance companies are already factoring that into their price structure. Yes, we’ll have sea level rises, but that can be fought with tax increases on the middle class to finance dikes and pumps and things. Drought and famine and starvation? Well, they’ll happen in those dusty countries where they already happen. None of that is going to hurt this quarter’s earnings. In the far future, say, ten years out? There are always winners as well as losers, and the winners can always find a bigger fool to buy their condo in Miami Beach, and then just shift the money across national borders and buy beachfront property in Vancouver or Halifax or some other future resort area. It’s not that they don’t understand the implications of global warming, it’s that understanding it is not in their best interests — some homespun philosopher once said that it’s hard to get a man to understand something if his paycheck depends on him not understanding it. There are more optimistic views, for example, Sara Robinson over at Orcinus. Note: I have later comments on the problem, from a systems perspective.

Hone on getting a PhD

October 4, 2009

Good series by David Hone on Advice for Young Researchers. He is a paleontologist, and it shows, but the information is still generally applicable to a wide range of disciplines, and I’d strongly recommend that anyone thinking about starting in any PhD program read the series.

One section, on getting a PhD, is particularly interesting, and has started me thinking:

I have found that every discipline, every school, every program, is different. Some engineering PhDs, for example, simply require the publication of three to five peer-reviewed articles in a given subject area. In some programs, it’s easy to generate your own data, because you are running your own experiments. In others, like business, you often need surveys, or (in paleontology) you need to visit museums and measure bones.

Hone talks about using your Supervisors (what I’d call the Dissertation Committee) as a resource, but not to pester them. I’d think of them as being like the Board of Directors of a new company. When you select them, you want to pick people who are experts in the areas your dissertation will cover — either the topic itself, or the tools you will use. Meet with them often, perhaps once or twice a year — it’s amazing how many people meet with their committee twice, at the start of the effort and four years later, the week before the defense. You are not meeting to ask them questions, you are meeting for them to ask you questions, so you can refine your approach. Then, since they brought it up, you can ask them questions.

One technique that I saw applied successfully in a history PhD, and that I tried, with marginal success, in my systems science PhD, was to make every class a dry run for part of the dissertation. Most classes required a paper. It’s helpful to think of that as a chance to do a literature search and descriptive writeup for one of your dissertation chapters. If it’s an applied class, like simulations, it’s a chance to build the tools you will need to do your work. Your goal should be that something out of each class should find its way into your dissertation. Of course, that assumes that you know your dissertation topic early in your PhD career. I changed my topic probably four times, starting with getting my original proposal shot down in flames. Still, the early concepts stayed on through the whole process.

Hone’s series is good, and useful. Go read the whole thing.

The Falafel Bullwhip

August 9, 2009

My friend, Kurt, who blogs here at WordPress in his Zephyr 98, has a cute essay on a sandwich. In it, the store where he shops had a shortage of falafel, and so couldn’t make his favorite sandwich. This went on for weeks. Then, suddenly, they had falafel again — tons and tons of falafel. The resulting sandwich was…mmm…Rubenesque, bordering on Dionysian. But the curvaceous sandwich isn’t what we are on about in this essay. It’s the falafel, and the oversupply thereof.

You see, Kurt’s sandwich shop appears to be the victim of a common phenomenon in the supply chain world, known as ‘the bullwhip effect’. When a multi-stage supply system has delays built into it, pent-up demand can result in a sudden oversupply, and your inventory flails around like a bullwhip. The best example is in the ‘Beer Game’, AKA the ‘Beer Distribution Game’ for those who don’t want their motives misunderstood. Here’s the wiki on it:

http://en.wikipedia.org/wiki/Beer_Distribution_Game

and there are resources that will let you play online. Go ahead. Play it. I’ll wait.

http://www.beergame.org/

http://beergame.mit.edu/guide.htm#Simulation
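If you’d rather see the mechanism than play the game, here is a stripped-down, single-stage sketch (all the numbers are invented, and the real game has four stages). A two-week delivery delay plus an ordering rule that ignores stock already in the pipeline is enough to keep inventory flailing long after a one-week bump in demand:

```python
from collections import deque

def simulate(weeks=24, target=12, delay=2):
    """One stage of a beer-game-style supply chain. Orders take
    `delay` weeks to arrive, and the ordering rule below ignores
    stock already in transit -- the classic beer-game mistake that
    produces the bullwhip oscillation."""
    inventory = target
    pipeline = deque([4] * delay)          # orders already in transit
    history = []
    for week in range(weeks):
        demand = 8 if week == 5 else 4     # a single one-week demand bump
        inventory += pipeline.popleft()    # this week's delivery arrives
        inventory -= demand                # customers buy their falafel
        # Naive order-up-to rule: replace this week's demand and top
        # inventory back up to target, pipeline be damned.
        order = max(0, target - inventory + demand)
        pipeline.append(order)
        history.append(inventory)
    return history

print(simulate())  # inventory keeps swinging long after the bump
```

With these numbers the inventory never re-settles: it cycles between 8 and 16 indefinitely after one perturbation. Including the pipeline in the ordering rule damps the whip, which is exactly the fix the game is designed to teach.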

If you check my Summer Reading entry of a month ago, you will find Peter Senge’s book The Fifth Discipline (1990). He made the Beer Game popular, and tied it to System Dynamics. One of the interesting findings from his years of using the game is that the bullwhip effect can occur even when you know what’s going on and try to fight it.

I strongly recommend the book, Kurt’s blog, and fat falafel sandwiches.

Reconstructability Analysis

July 7, 2009

Reconstructability analysis (RA) is an information- and graph-theoretic methodology which originates with Ross Ashby’s constraint analysis and was subsequently developed by several others. RA resembles log-linear methods used widely in the social sciences, and where RA and log-linear methodologies overlap they are equivalent. RA also overlaps with Bayesian networks. In RA, a probability or frequency distribution or a set-theoretic relation is decomposed into component distributions or relations. When applied to the decomposition of frequency distributions, RA does statistical analysis. RA can model problems both where “independent variables” (inputs) and “dependent variables” (outputs) are distinguished (called directed systems) and where this distinction is not made (neutral systems). Being based on information theory, which ignores metric information in the variables being analyzed, RA is a natural methodology for nominal, e.g., genomic, data.
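As a minimal, hand-rolled illustration of the information-theoretic core (the distributions below are toy examples; real RA handles many variables and far richer decompositions): for two variables, the constraint Ashby analyzed is just the mutual information — the information lost when the joint distribution is replaced by the product of its marginals, which is the simplest possible RA decomposition:

```python
from math import log2

def mutual_information(joint):
    """Constraint (transmission, in Ashby's terms) between two
    variables: how much information is lost if the joint distribution
    is reconstructed from its marginals alone."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Perfectly correlated binary variables: one full bit of constraint.
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))            # 1.0

# Independent variables: the marginals reconstruct the joint exactly.
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))          # 0.0
```

Note that nothing here cares whether 0 and 1 are numbers, genotypes, or survey answers — which is the point about nominal data.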

Right now, I’m looking at how RA compares with Logistic Regression. They produce identical classification rates for low-penetrance genetic data, but RA appears to be easier to use — you don’t have to create dummy variables, and the results look to be more directly readable.

Systems and Hierarchies

April 4, 2009

So, if a system is a collection of things that interact (A), and at a higher level it is a single entity (B), with attributes, that means it’s already a hierarchy. Not just a natural hierarchy, but one that can’t exist without being one. And since, due to the recursive laws of recursion, an entity at any level (including the B-level we just talked about) can interact with other entities at the same level (call this view A’) to produce a system, we can have a hierarchical structure such that it’s “systems all the way down.”

This means that a system is hierarchical, not in a bad kings->nobles->peasants way, but in the same organic way that your body is: body->kidney->nephrons. On the other hand, this doesn’t mean that the kings->peasants thing isn’t a system, it means that there’s more going on than the kings and nobles idea captures, and a simple “Ruritania owes allegiance to Hentzau” approach won’t work.

Next: System boundaries, or “Systemness is in the eye of the beholder”

Systems Theory

March 14, 2009

My main passion is Systems Science, sometimes called General Systems Theory, or Complexity Theory, or the Theory of Complex Adaptive Systems. It’s the study of how things fit together, and we don’t really care what those things are.

Systems Science is sometimes set in opposition to Cartesian Analytics — Rene Descartes’ invention for understanding problems by breaking them down into their component parts. Systems Science originally grew out of biology, where people studied, say, frogs, by breaking them down (they called it dissecting) into livers and spleens and things and studying each in turn. That is a tremendously powerful technique, and has driven most of our advances in science since, well, science. The trouble is, when you do that, you lose a certain frogginess. You no longer have a frog, you have a plate of giblets, because a frog exhibits behavior that no single organ does. Systems Science calls collections of interacting things like this a system, and it’s these systems, and their emergent behaviors, that form the basis of our studies.

Of course, we are not limited to biology. Consider an automobile. What’s the purpose of an automobile? To get you from here to there. There are other possible purposes — storage, romance, many things — but we’ll just consider this one. So, the systems scientist says, what part of the automobile gets you from here to there? The answer, of course, is “all of it”. An engine isn’t any use without wheels, and you actually need five wheels to be useful (one at each corner, and one in front of the driver). Even the caveman transport in the old BC comic needed two parts, a wheel and an axle.

Next time, we’ll talk about boundaries and hierarchy in systems.