Thore Husfeldt

Archive for the ‘Exposition’ Category

The Monster in the Library of Turing


Appears (in Swedish) as T. Husfeldt, Monstret i Turings bibliotek, Filosofisk tidskrift 36(4):44-53, 2015.

[As PDF]

Nick Bostrom’s Superintelligence presents a dystopian view of strong artificial intelligence: Not only will it be the greatest invention of mankind, it will also be the last, and this emerging technology should be viewed as an immediate and catastrophic risk to our species, like molecular nanotechnology, nuclear power, or chemical warfare.

Nick Bostrom, Superintelligence, Oxford University Press, 2014.

Artificial intelligence with super-human capabilities will be the last invention controlled by Homo sapiens, since all subsequent inventions will be made by the “superintelligence” itself. Moreover, unless we are extremely careful or lucky, the superintelligence will destroy us, or at least radically change our living conditions in a way that we may find undesirable. Since we are currently investigating many technologies that may lead to a superintelligence, now would be a good time for reflection.

Nobody knows, much less agrees on, how to define intelligence, be it general, artificial, or strong. Neither does Bostrom. By his own admission, his book is inherently speculative and probably wrong. Not even the rudiments of the relevant technology may be known today. However, many of Bostrom’s arguments are quite robust to the particulars of “what?” and “how?”, and the book can be enjoyed without a rigorous definition. For now, we imagine the superintelligence as a hypothetical agent that is much smarter than the best current human brains in every cognitive activity, including scientific creativity, general wisdom, and social skills.

Pathways and Consequences

Bostrom describes a number of pathways towards a superintelligence. They consist of current scientific activities and emerging technologies, extrapolated beyond human cognitive capabilities. These include

  • neuroanatomical approaches, such as simulation of whole brains (Einstein’s brain in a vat or simulated on a computer, neurone by neurone),
  • genetically or artificially modified humans (brain implants, parental gamete selection for intelligence),
  • intelligence as an emergent phenomenon in simpler networks (the internet as a whole “becomes conscious”),
  • computer science approaches like machine learning (IBM’s Jeopardy-winning Watson) and “good old-fashioned artificial intelligence” using symbolic reasoning (chess computers).

None of these technologies is currently close to even a dullard’s intelligence, but on a historical time scale, they are all very new. Bostrom’s description is an entertaining tour of computer science, neuroanatomy, evolutionary biology, cognitive psychology, and related fields. It is moderately informative, but none of it is authoritative.

The consequences of any of these research programs actually achieving their goals are even more speculative. Several futurists have developed scenarios for how a superintelligent future might look, and Bostrom surveys many of these ideas. In one of those visions, accelerating and self-improving digital technologies quickly overtake human cognitive abilities and transform our environment, much as we transformed pre-historic Earth. After a short while, all the protons currently making up the planets of the solar system are put to better use as solar-powered computing devices, orbiting the sun in a vast networked intelligence known as a Dyson sphere. There are many other scenarios, partly depending on which approach will turn out to win the race against our own intelligence, and how the result can be harnessed. Not all these possible futures are dystopian. Some even leave room for humans, maybe uploaded to a digital version of immortality, or kept in zoos. But they certainly entail dramatic changes to our way of life, easily comparable to the invention of agriculture or the industrial revolution.

The Control Problem

Bostrom’s book begins with a short fable, in which a group of birds agrees to look for an owl chick to help them with nest-building and other menial labours. We immediately see their folly: The owl chick will quickly outgrow the birds, throw off their yoke, and quite probably eat them, as is its nature. The birds would have been better off thinking about the result of their search before it was too late.

However, in its most harmless form, the superintelligence is merely a device that excels at goal-oriented behaviour, without intentionality, consciousness, or predatory tendencies. Bostrom explains that even such a superintelligent servant, docile in comparison to the rampant killer robots from blockbuster movies, would be a terrible thing.

Bostrom describes an interesting thought experiment about how to specify the behaviour of a superintelligent machine that produces paperclips. There are at least two undesirable outcomes that stem from “perverse instantiation” of the machine’s task. One is that the machine might turn everything into paperclips, including humans. Or, faced with a more carefully worded task, the machine first solves the problem of increasing its own intelligence (to become a better paperclip producer), turning the entire solar system into microprocessors. No matter how much we continue specifying our orders, the machine continues to avoid “doing what we mean” in increasingly intricate ways, all with catastrophic outcomes.

I like to think of this idea in terms of Goethe’s Zauberlehrling, immortalised by Mickey Mouse in Disney’s Fantasia. In this tale, the sorcerer’s apprentice commands his enchanted broomstick to fill a bathtub with water. Obedient and literal-minded, the magical servant hauls up bucket after bucket, yet continues long after the tub has been filled. Disaster is averted only because the sorcerer himself arrives in the nick of time to save Mickey from drowning. No malice was involved; unlike Bostrom’s allegorical owl chick, the broomstick has no volition other than the dutiful execution of Mickey’s task. It was Mickey who failed to be sufficiently precise in formulating the task. Had Goethe’s broomstick been superintelligent, it seems to me that it might simply have killed the apprentice and thrown his body into the tub. After all, 70 percent of it is water, so the task would be completed with speed and elegance.

Illustration of Der Zauberlehrling. From: Goethe’s Werke, 1882, drawing by Ferdinand Barth (1842–1892).

These games of formalism may strike you as trite exercises in deliberately misunderstanding instructions against their intention. However, every programmer or lawmaker knows that this is exactly why software is so hard to write and laws so hard to formulate. If there is a way to misunderstand your instructions, it will happen.

Thus, there is no need to attribute malice to the superintelligence. Instead, we might be doomed even by a perfectly obedient agent labouring to perform ostensibly innocent tasks. In the words of Eliezer Yudkowsky, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” 1 Of course, a superintelligence with volition, intentionality, or free will, would be even harder to control.

The control problem has many aspects, one of which is purely institutional: If the superintelligence is developed in vicious competition between corporations or militaries, none of them is motivated to stop the development, for fear of losing a technological arms race. As a civilisation, we have an unimpressive track record of preventing globally harmful behaviour when individual proximate gains are at hand. This addresses a popular rejoinder to a dystopian intelligence explosion: “Why don’t we just pull the plug?” First, it’s not clear that there is a plug to pull.2 Second, who is “we”?

Assuming we can solve the institutional problem of who gets to design the superintelligence, the solution seems to be to codify our goals and values, so that it won’t act in ways that we find undesirable. This, of course, is a problem as old as philosophy: What are our values? And again, who is “we”: the species, the individual, or our future? Should the paperclip maximiser make sure no children are killed? Should no unborn children be aborted? Is it in the alcoholic’s interest to receive alcohol or not?

Bostrom speculates whether we can specify “what is good for us” on a higher-order level, asking the superintelligence to extrapolate our desires based on a charitable (rather than literal) interpretation of our own flawed descriptions, possibly in a hierarchy of decisions deferred to ever higher moral and intellectual powers. Assuming that we were to agree on a meaningful “Three Laws of Robotics” that accurately described our values without risking perverse instantiation, how do we then motivate a vastly superior intellect to follow these laws? I find these speculations both entertaining and thought-provoking.3

Some futurists welcome the idea of a paternalistic superintelligence that unburdens our stone-age mind from solving our own ethical questions. Even so, as Bostrom argues, if we share our environment with a superintelligence, our future depends on its decisions, much like the future of gorillas depends on human decisions. If we are currently building the superintelligence, we have but one chance to make it treat us as nicely as we treat the gorillas.

The Library of Turing

Bostrom tacitly assumes a strictly functionalist view of the mind. For instance, if we were able to perfectly simulate the neurochemical workings of a brain, this simulation itself would be a mind with consciousness, volition, and agency. Once the functions of the brain are understood, the substrate on which their simulation runs is secondary. From this perspective, questions about cognition, intelligence, and even agency, are ultimately computational questions.

Let me invite you to a fictional place that I call the Library of Turing. It is inspired by La biblioteca de Babel, described in a short story by Jorge Luis Borges. His library consists of an enormous collection of all possible books of a certain format. Almost every book is gibberish, as if it had been typed by intoxicated monkeys. One book consists entirely of the letter A, but another contains Hamlet.4 In Borges’ story, the librarians find themselves in a suicidal state of despair; surrounded by an ocean of knowledge, yet unable to navigate it due to the library’s overwhelming dimensions. Our brains cannot fathom its size, and our languages cannot describe it;5 “unimaginably vast” does not even come close. However, the metaphor of the library helps us to build some kind of mental image of its size.6

Behold, then, the Library of Turing. Each volume contains computer code. Much of it is garbage, but some volumes contain meaningful programs in some concrete programming language, let’s say Lisp.7 We can feed these programs to a computer found in each room. Most wouldn’t do anything useful, many would cause the computer to crash. From our starting room, lined with bookcases, we see exits to other rooms in all directions. These rooms contain more volumes, more bookcases, and more exits. There is no discernible ordering in the library, but close to where we are standing, a particularly worn volume contains the program

        (print "Hello, world!")

which instructs the computer to output the friendly message “Hello, world!”. Next to it, a book contains the nonsensical phrase “Etaoin shrdlu,” not a valid program in Lisp. We shrug and put it back. Thanks to the diligence of some algorithmic librarian, a third book in our room contains the complete code for the primitive natural language processor Eliza, written by Joseph Weizenbaum in 1966. This famous little program is able to pretend to have (typed) conversations with you, like this:

“Tell me your troubles.”

“My cat hates me.”

“What makes you believe cat hates you?”

The primitive grammatical mistake of leaving out the pronoun gives you a clue that Eliza works by very simple syntactic substitutions, without any attempt at understanding what you mean. Still, it’s better than just saying “Hello, world!” If we continue browsing, we might find the code of some natural language processors that are state-of-the-art in the 2010s, such as IBM’s Watson, or Apple’s Siri.
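The substitution trick can be sketched in a handful of lines. What follows is not Weizenbaum’s code but a hypothetical miniature in Python with a single invented rewrite rule; note how reflecting the words back without handling the possessive pronoun reproduces exactly the grammatical mistake above:

```python
import re

def eliza_reply(sentence):
    # One rewrite rule in the spirit of Eliza: match "My ... me"
    # and reflect the middle part back at the speaker. Dropping the
    # pronoun ("your") reproduces Eliza's telltale mistake.
    m = re.match(r"[Mm]y (.+) me\.?$", sentence)
    if m:
        return "What makes you believe " + m.group(1) + " you?"
    return "Tell me more."

print(eliza_reply("My cat hates me."))
# → What makes you believe cat hates you?
```

Real Eliza had dozens of such patterns and a table of pronoun swaps, but nothing resembling an understanding of what the words mean.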


Fragments of the source code of Eliza, an early natural language processor. The whole program takes just over 1000 lines.

Indeed, our conceit is that every algorithm appears somewhere in Turing’s library.8 In particular, if we accept functionalism, some book describes the behaviour of Einstein’s brain: On input “What is 5 + 5” or “Do you prefer Brahms?” or “Prove the Riemann hypothesis” it will give the same answer as Einstein. We have no idea how this program looks. Maybe it is based on symbolic artificial intelligence, like Eliza. Or maybe the bulk of the books is taken up by a “connectome” of Einstein’s actual brain, i.e., a comprehensive map of its neural connections. Such a map has been known for C. elegans, a transparent roundworm with 302 neurones, since the mid-1980s, and even though Einstein’s brain is considerably larger, this is not absurd in principle.
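The conceit that every program has its fixed place on the shelves can be made precise: the volumes admit a systematic enumeration, shortest first. Here is a toy sketch in Python; the tiny alphabet is my own invention for illustration, where the real library would use the full character set:

```python
from itertools import count, product

ALPHABET = '() "abcdefghijklmnopqrstuvwxyz!,'  # 32 toy characters

def library():
    """Yield every finite string over ALPHABET in order of length:
    each volume of Turing's library in turn, garbage and all."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Every string -- including (print "Hello, world!") -- sits at some
# fixed position; the only question is how far away that position is.
for i, volume in enumerate(library()):
    if volume == "()":
        print("found () at position", i)
        break
```

The enumeration never misses a volume; the catastrophe, as we shall see, is how quickly the positions grow.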

Somewhere in Turing’s library there must be a description of a computational process that is vastly superior to every human brain. Otherwise Einstein’s brain would have been the smartest algorithmic intelligence allowed by the laws of logic, or close to it. Yet it seems unlikely, not to say self-congratulatory, that evolution on the planet Earth succeeded in constructing the ultimate problem-solving device in just a few million generations.

Translated into this setting, Bostrom suggests that our discovery of this description would have catastrophic consequences. The library contains a monster. Our exploration should proceed with prudence rather than ardour.

Searching for the Monster

Given that the description of a superintelligence exists somewhere in the library, all that remains is to find it. If you don’t work in computer science, this task seems to be relatively trivial. After all, even a mindless exhaustive search will find the volume in question. While this approach is not a particularly tempting exercise for a human, computers are supposed to be good at performing mindless, well-defined, repetitive, and routine tasks at high speed and without complaining. Thus, sooner or later we stumble upon the monster; the only debate is whether the search time is measured in years, generations, or geological time scales.

But this is a fallacy. It is a fallacy because of the unreasonable growth rate of the exponential function, and the powerlessness of exhaustive search.

To appreciate this, consider again the 23 symbols that make up our friendly “Hello, world!” program. We could have found it, in principle, by exhaustively searching through all sequences of characters that make up Lisp programs.

How much time does this take? That is a simple exercise in combinatorics, after we fix some details—how many symbols are there, how fast is the computer, and so on. But no matter how you fix these details, the result is disappointing.9 If a modern computer had started this calculation when the universe began, it would have gotten to somewhere around (print "Hello,"). You could gain a few extra letters by throwing millions of computers at the problem. But even then, the universe does not have enough resources to allow for an exhaustive search for even a very simple program. Either you run out of time before the universe ends, or you run out of protons to build computers with. Computation is a resource, exponential growth is immense, and the universe is finite.
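The disappointing arithmetic is easy to verify. The sketch below, in Python, uses round figures in the spirit of the endnotes—an alphabet of 128 characters, 10^9 candidates per second, and roughly 4 × 10^17 seconds since the Big Bang; the exact choices barely matter:

```python
ALPHABET = 128            # size of the character set
RATE = 10 ** 9            # candidate programs tried per second
AGE = 4 * 10 ** 17        # seconds since the Big Bang, roughly

tried = RATE * AGE        # total strings one machine could have examined

# Longest length L such that all ALPHABET**L strings fit in the budget:
length = 0
while ALPHABET ** (length + 1) <= tried:
    length += 1

print(length)  # → 12: stuck well short of our 23-symbol program
```

Doubling the speed, or adding a million machines, buys only a couple of extra characters: each additional symbol multiplies the work by 128.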

Bostrom pays little attention to this problem. For instance, in his discussion of current efforts on brain simulation, he writes:

Success at emulating a tiny brain, such as that of C. elegans, would give us a better view of what it would take to emulate larger brains. At some point in the technology development process, once techniques are available for automatically emulating small quantities of brain tissue, the problem reduces to one of scaling.

Well, to me, scaling is the problem.

One might object that this observation is a mundane, quantitative argument that does not invalidate the compelling fact that in principle, the exhaustive search will sooner or later stumble upon the fully-fledged “Hello, world!” program, and eventually the monstrous superintelligence. But that is exactly my point: You can simultaneously accept machine-state functionalism and be blasé about the prospects of AI. Speculation about the imminent discovery of the monster in Turing’s Library must be grounded in computational thinking, which is all about the growth rates of computational resources.

Could there be another way of discovering the superintelligence than exhaustive search? Certainly. After all, nature has discovered one of the monsters, the brain of Homo sapiens, starting with very simple “brains” hundreds of millions of years ago and refining them stepwise by natural selection and mutation in environments that favoured cognition. Thus, there is some gradient, some sensible set of stepwise refinements, that Nature has followed through the Library of Turing to arrive at Einstein’s brain.10 We just don’t know how to recognise, or efficiently follow, this gradient. In the enthusiasm surrounding artificial intelligence in the 1960s we might have thought that we were on such a gradient. If so, we have abandoned it. The current trend in artificial intelligence, away from symbolic reasoning and towards statistical methods like machine learning that do not aim to build cognitive models, strikes me as an unlikely pathway.

Given our current understanding of the limits of computation, there are many relatively innocent-looking computational problems that are computationally hard.11 After more than a generation of very serious research into these problems, nobody knows anything better than exhaustive search for many of them. In fact, while the amazing features of your smartphone may tell you a different story, the main conclusion of a generation of research in algorithms is bleak: for most computational problems we don’t know what to do.12

Let me repeat that I do not try to rule out the existence of a monster. I merely point out that even if a monster exists, we may never run into it. Turing’s Library is just too vast; nobody has labelled the shelves, much less put up useful signs and maps. Our intuition is mistaken when it extrapolates that we will soon have mapped the entire library, just because we’ve seen more of it than our grandparents ever did. Moreover, imminent and rampant progress in exhaustive search may have been a reasonable hope half a century ago. But today we know how hard it is.

Research Directions

If the algorithmic perspective above is correct, then there is nothing to worry about. The superintelligence is an entertaining fiction, no more worthy of our attention than an imminent invasion by aliens,13 eldritch gods from the oceans, or a sudden emergence of magic. The issues raised are inherently unscientific, and we might as well worry about imminent overpopulation on Venus or how many angels can dance on the head of a pin.

But I could be wrong. After all, I wager the future of human civilization on a presumption about the computational complexity of an ill-defined problem. I do think that I have good reasons for that, and that these reasons are more firmly grounded in our current understanding of computation than Bostrom’s extrapolations. However, if I’m wrong and Bostrom is right, then humanity will soon be dead, unless we join his research agenda. Thus, from the perspective of Pascal’s wager, it may seem prudent to entertain Bostrom’s concerns.

The problem domain opened up by Superintelligence may lead to valid research in various areas, independently of the plausibility of the underlying hypothesis.

For instance, the problem of codifying human values, a formalization of ethics, is a problem as old as philosophy. Bostrom’s prediction injects these questions with operational significance, as these codified ethics would be the framework for the incentive structure needed to solve the control problem. However, an “algorithmic code of ethics” is a worthwhile topic anyway. After all, our digital societies are becoming increasingly controlled by algorithmic processes. These processes are nowhere near superintelligent, but they are highly influential: Algorithms choose our news, decide our insurance premiums, and filter our friends and potential mates. Their consistency with human values is a valid question for ethicists and algorithmicists alike, no matter if the algorithms are authored by Google’s very real software engineers or a very hypothetical paternalistic AI.

Another issue related to the control problem is the ongoing work on the semantics and verification of programming languages. This has been a well-established subfield of computer science for a generation already. Some of these issues go back to Gödel’s incompleteness theorems from the 1930s, which prove that formal systems are unable to reason about the behaviour of formal systems. In particular, it has turned out to be a very difficult problem of logic to specify and verify the intended behaviour of even very short pieces of computer code. It may well be that questions about harnessing the behaviour of a superintelligence can be investigated using these tools. Other scientific areas may provide similar connection points: Incentive structures are a valid question for game theory and algorithmic mechanism design, which resides on the intersection of computer science and economics. Or the computational aspects of artificial intelligence may become sufficiently well defined to become a valid question for researchers in the field of computational complexity. This might all lead to fruitful research, even if the superintelligence remains eternally absent. But whether superintelligence becomes a valid topic for any of these disciplines is very unclear to me. It may depend on purely social issues within those research communities.

It may also depend on the availability of external funding.

This last point leads me to speculate about a very real potential impact of Bostrom’s book. It has to do with the perception of artificial intelligence in the public eye. Research ethics are a political question ultimately decided by the public. The most Herculean promise of current artificial intelligence research is the construction of general artificial intelligence. What if the public no longer saw that as a goal, but as a threat? If Bostrom’s dystopian perspective leads to increased public awareness of the catastrophic effects of superintelligence, then general AI would not appear under “Goals” in a research plan, but under “Ethical considerations” that merit public scrutiny along with climate change, sensitive medical data, rampant superviruses, animal testing, or nuclear disasters. And these, largely political, requirements may themselves necessitate research into the control problem.

We may want to ponder the consequences of running into a monster in the Library of Turing no matter how plausible that event may be.

  1. Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk, 2006.
  2. Unlike in the movies, we must assume that a superintelligence can anticipate the efforts of a plucky group of heroes. Bostrom describes a number of ways in which to make yourself immune to such an attack by distributing the computational substrate. Surely a superintelligence can think of more.
  3. I particularly like the idea of designing an AI with a craving for cryptographic puzzles to which we know the answer.
  4. A million others contain Hamlet with exactly one misprint.
  5. The language of mathematics is an exception. But our brains cannot imagine 10^25, the number of stars, much less 10^123, the number of atoms in the universe. By contrast, the number of books in the Library of Babel is 25^1,312,000.
  6. Daniel Dennett has used this idea as “the Library of Mendel” to illustrate the vast configuration space of genetic variation. Daniel C. Dennett, Darwin’s Dangerous Idea, Simon & Schuster, 1995.
  7. I chose Lisp because of its role as an early AI programming language, but any other language would do. For a cleaner argument, I could have chosen the syntax of Alan Turing’s original “universal machine”. In the language of computability theory, all these choices are “Turing-complete”: they can simulate each other.
  8. Some programs may be so big as to be spread over several books, maybe referring to other volumes as subroutines much like modern software is arranged into so-called software libraries.
  9. Lisp is written in an alphabet of at most 128 characters. A modern computer performing 10^9 operations per second, running since the Big Bang, would not have finished all 128^13 strings of length 13.
  10. If we entertain this idea, then a pathway to intelligence would be the simulation of competitive environments in which to artificially evolve intelligence by mutation and selection. This replaces the problem of simulating entire brains with that of simulating entire adaptive environments, including the brains in them.
  11. This claim is formalised in various hypotheses in the theory of computational complexity, such as “P is not NP” and the Exponential Time Hypothesis.
  12. Not even functioning quantum computers would change this.
  13. Here is the analogy in full: We have explored the Moon and are about to explore Mars. We should worry about imminent contact with alien civilisations.

Written by thorehusfeldt

April 10, 2015 at 07:53

Posted in Exposition

Universal Computation Told in Quotes


In the words of George Bernard Shaw,

I often quote myself. It adds spice to my conversation.

Selfie with Alan at Bletchley Park, 2012

The other day, while preparing a lecture on the impact of Alan Turing’s work, I had the opportunity to come up with the following deepism:

Nothing in information technology makes sense except in the light of universal computation.

—Thore Husfeldt, 2015

My formulation, of course, deliberately mimics a meme of evolutionary biology,

Nothing in biology makes sense except in the light of evolution.

—Theodosius Dobzhansky

from the eponymous essay ([Wikipedia]).

The claim I want to establish is that universal computation is to information technology what evolution is to biology: the single fundamental conceptual breakthrough that makes everything clear, and without which many phenomena become impossible to reason about.

The story about Turing’s result goes something like this: In the Olden Days, computation was domain-specific. Some of it was very sophisticated, and by the 19th century even programmable, such as the Jacquard loom or Babbage’s Analytical Engine. But the insight of universal computation, formulated in Turing’s 1936 paper, comes only in section 6, mainly to lend credence to the validity of the computational model he had just defined:

It is possible to invent a single machine which can be used to compute any computable sequence.

A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. London Math. Soc., 1936

To do this, Turing observes that the behaviour of a specific computational device (“the hardware”) can be turned into an algorithmic description and then fed as input to another, sufficiently strong machine. By this conflation of “device”, “algorithm”, and “data”, all computational models, provided they reach some minimal level of algorithmic power, are equally powerful (in a qualitative sense) in that they can simulate each other. The device operating Jacquard’s loom and Babbage’s Analytical Engine might as well be one and the same, namely the device we now call the universal computer. Bam!
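The conflation of device and data is easy to demonstrate concretely. Below is a hypothetical miniature in Python, not Turing’s own construction: one fixed routine `run` that executes any machine handed to it as a plain transition table, so the “hardware” for, say, bit-flipping is reduced to data:

```python
def run(machine, tape, state="start", max_steps=1000):
    """A universal routine: the machine to simulate is itself input,
    given as a table mapping (state, symbol) to (state, write, move)."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")        # "_" marks a blank cell
        state, write, move = machine[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A bit-flipping machine, described purely as data:
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(flip, "0110"))  # → 1001
```

Swap in a different table and the very same `run` computes something else entirely; nothing about the “device” needs to change.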

In the Days of Yore, devices were appliances like washing machines and lawn mowers. They did one thing really well. But in the information age, there is only Turing’s universal computer.

Like every conceptual breakthrough, once you’ve grasped it, it becomes mundane and trivial. Today, children have universal computers in their pockets (a smartphone), and the idea that of course this device is both a music player and a word processor and a calculator and a game is obvious.

To appreciate the change of mindset, consider what computer pioneer Howard Aiken said as late as 1956, two decades after Turing’s paper:

If it should turn out that the basic logics of a machine designed for the numerical solution of differential equations coincide with the logics of a machine intended to make bills for a department store, I would regard this as the most amazing coincidence that I have ever encountered.

Yet this is exactly how it turned out!

A splendid book that makes this point very strongly is Martin Davis’s The Universal Computer: The Road from Leibniz to Turing. Highly recommended.

This is a good story, and I like telling it. In this framing, the contribution of Alan Turing stands next to that of Charles Darwin.

So there I was, eagerly preparing slides and rehearsing the timing of anecdotes for yet another public lecture about this – in fact, for a pre-premiere of the Turing drama The Imitation Game.

Alas, in the middle of my preparations, a trusted colleague pointed me to a paper by Copeland: B. Jack Copeland, “Unfair to Aiken”, IEEE Annals of the History of Computing, vol.26, no. 4, pp. 35-37, October-December 2004, doi:10.1109/MAHC.2004.36. It turns out that the Aiken quote above is taken grossly out of context—Aiken understood universal computation perfectly well and merely made a completely valid point about hardware optimisation!

Thus was a good story ruined for me.

So where to turn instead to find a solid demonstration of computational ignorance? Luckily, I had to look no further than the news:

[W]e want to allow a means of communication between two people which even in extremis with a signed warrant from the home secretary personally that we cannot read? … My answer to that question is no, we must not.

David Cameron, January 2015

Thank you, David!

The problem with this statement, apart from the galling infringement of fundamental rights and a number of other points, is the technological incompetence: Encryption is an algorithmic process that requires merely some very basic computational fundamentals. Outlawing encryption is computationally equivalent to outlawing multiplication. Today we don’t have an appliance for e-mail (which Cameron could then, presumably, control from the government). Instead, we have a universal computer that does amazing things, including encrypted email.
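To see just how basic those computational fundamentals are: a one-time pad, the textbook example of unbreakable encryption, is nothing but the XOR operation that every general-purpose computer must support. A toy sketch in Python (the fixed key is for illustration only; a real one-time pad requires a truly random key as long as the message, used once):

```python
def xor_cipher(message: bytes, key: bytes) -> bytes:
    # Encryption reduced to elementary arithmetic: XOR each byte
    # of the message with the corresponding byte of the key.
    return bytes(m ^ k for m, k in zip(message, key))

key = bytes(range(5, 5 + 13))               # toy key, NOT random
ciphertext = xor_cipher(b"Hello, world!", key)
print(xor_cipher(ciphertext, key))          # → b'Hello, world!'
```

Applying the same operation twice recovers the message. Any machine that can do arithmetic can do this, which is why outlawing it is outlawing the computer itself.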

Cory Doctorow phrased the question underlying the desires of people like David Cameron in a blog post a few years ago:

“Can’t you just make us a general-purpose computer that runs all the programs, except the ones that scare and anger us? Can’t you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?”

Cory Doctorow, Lockdown: The coming war on general-purpose computing, 2011.

Well, no you can’t. Because Turing.

This angle fit my lecture perfectly, of course: The Imitation Game is a film about encryption and the struggle against totalitarianism and the fact that your own government may not have your own best interests at heart, so I was able to replace the slanderous Aiken quote with a current events angle that made Turing even more relevant!

Written by thorehusfeldt

January 25, 2015 at 18:04

Universal Computation

leave a comment »

Nothing in information technology makes sense except in the light of universal computation.

Written by thorehusfeldt

January 23, 2015 at 14:28

Posted in Exposition

Personal reality (Gödel Prize 2014)

leave a comment »

This is a quick English translation of a draft I wrote about the Threshold Algorithm of Fagin, Lotem, and Naor, which received the Gödel Prize at ICALP 2014. I’ve run this through Google translate and ironed out the worst grammatical quirks by hand, but the result is far from idiomatic English. My apologies. A slightly abridged version of the Danish text was published in the Danish weekly broadsheet Weekendavisen 2014 #27 from July 4th 2014, in the Science section Idéer, p. 12.

Algorithms. On July 9, the Gödel Prize in theoretical computer science will be awarded at a conference in Copenhagen. This year the prize goes to an algorithm that, among other things, enables personalised information retrieval. The existence of such efficient algorithms determines how digital reality looks to us.


Information technology has led to two quite radical changes in the way we access information. In the “old days” of the mid-nineties, information was organized like a library. The web service Yahoo! had categorized the web’s pages by topic. If you wanted to find anything about Paris, you had to navigate through a hierarchy—Geography, Europe, France, cities, Paris—just like finding the right shelf in the public library. Google’s page ranking algorithm PageRank from 1998 was a fundamental rethinking of this centuries-old tradition. Now you could write “Paris” into a search field and immediately get a list of all web pages about Paris, ranked by quality—a little like walking up to the librarian, muttering a single word, and letting her do all the lookup work for you. Categories and hierarchies were gone! Google’s algorithm was so fundamental that today it seems inconceivable to most people to search the web in any other way than by ranked free-text search.
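The ranking idea can be illustrated with a toy power-iteration version of PageRank; this is a hypothetical sketch of the principle only, not Google’s actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration over a link graph.

    links -- dict mapping each page to the list of pages it links to
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share  # pass rank along each outgoing link
            else:
                # Dangling page: spread its rank evenly over all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# A three-page web in which everyone links to "paris":
ranks = pagerank({"paris": [], "a": ["paris"], "b": ["paris", "a"]})
assert ranks["paris"] == max(ranks.values())
```

A page is ranked highly when highly ranked pages link to it, which is exactly the recursion the iteration above converges to.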

The next change came in the late noughties: personalised search results. The term “Paris Hilton” means something completely different to Jürgen, a German tourist in a European city, than to Ashley, an American teenager. Therefore, search engines like Google try to adjust the ranked list of answers to the individual user, based on the digital traces we have left. The user’s age and geographical residence are enough to recommend the right hotel for Jürgen and the right celebrity for Ashley. Google’s vision is to answer the question before the user has asked it. To keep with the library metaphor, this is equivalent to walking up to the librarian and having a book pressed into your hand before you open your mouth.

Personalisation is the information reality we live in right now. Google personalises web searches, Amazon personalises book recommendations, Netflix personalises movies, and Spotify personalises music. Dating services have always been personalised, and the large web-based dating sites have obviously jumped on the personalisation bandwagon as well. Personalisation has many interesting potential consequences, such as the possible creation of so-called filter bubbles in information dissemination, which WA has covered earlier [It’s all about you, WA 20, 2011].

The algorithms involved in making personalisation work have a direct and significant impact on how we perceive the world—for better or worse. One of them, the Threshold Algorithm for top-k queries by Ron Fagin, Amnon Lotem, and Moni Naor, will receive theoretical computer science’s most distinguished honour, the Gödel Prize, at a conference in Copenhagen.

While algorithms make a lot of choices for us and know more and more about us, the public knows precious little about algorithms. However, an algorithm is neither artificial intelligence, nor black magic, nor “what a programmer says when he does not want to explain what happens.” Algorithms are problem-solving methods, expressed in terms so precise that they can be performed by a computer—which you can think of as an infinitely patient, obedient, and annoyingly literal-minded twelve-year-old.

Before the personalisation wave of 2010, a search service could content itself with sorting information by just one parameter. For example, a bestseller list ranks books by copies sold. But personalisation opens the possibility of ranking and categorising books by price, age, number of pages, language, genre, etc. Whether a book was written by a Danish author may be very relevant to Mrs Hansen but totally irrelevant to Mrs Jensen. And Mrs Hansen may hate crime novels while Mrs Jensen loves them. Personalisation is therefore implemented as a weighting of the product’s many parameters, with the weighting tailored to Mrs Jensen.

But how do we now keep our books sorted when each user has his or her own view of what is relevant? In principle, this is very easy to solve: run through all books, calculate Mrs Jensen’s score for each of them, sort the results by score, and present her with the top 20.
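In code, the in-principle solution is a scoring function plus one sort; the book attributes and weights below are invented purely for illustration:

```python
def naive_top_k(books, weights, k=20):
    """Naive personalised ranking: score every book for one user,
    sort the whole collection, and return the k best.

    books   -- list of dicts mapping attribute name -> numeric value
    weights -- one user's personal weight for each attribute
    """
    def score(book):
        return sum(weights[attr] * book.get(attr, 0.0) for attr in weights)

    # Touches (and sorts) the entire collection for a single query.
    return sorted(books, key=score, reverse=True)[:k]

# Mrs Jensen dislikes crime novels but likes cheap books:
books = [
    {"title": "A", "crime": 1.0, "cheapness": 0.2},
    {"title": "B", "crime": 0.0, "cheapness": 0.9},
    {"title": "C", "crime": 0.1, "cheapness": 0.8},
]
jensen = {"crime": -1.0, "cheapness": 1.0}
print([b["title"] for b in naive_top_k(books, jensen, k=2)])  # → ['B', 'C']
```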

Unfortunately, this takes too long.

This bears repeating, because it is contrary to our intuition: Sorting takes too long.

How can this be? Aren’t computers becoming faster and faster? Yes, they are. Technological development has increased processor speeds dramatically since ENIAC, the machine from the end of World War II associated with Princeton mathematician John von Neumann, which weighed 30 tons and consisted of 17,468 vacuum tubes. But at the same time, data volumes have grown bigger and bigger. In fact, computer memory and problem sizes have grown even faster than processor speed. You could say that, over time, computers have become slower and slower relative to the computational problems we expose them to. As a rule of thumb, “it takes a computer a second to touch all its data” has held then as now. The algorithmic challenge is thus an aspect of big data: we barely have time to answer even the simplest questions about very large data volumes.
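A back-of-envelope calculation makes the rule of thumb concrete; the 10 GB/s scan rate is an assumed, merely illustrative figure, not a measurement:

```python
# Rough estimate: how long does one sequential pass over the data take?
# Assume a single machine streams data at ~10 GB/s (an optimistic,
# illustrative figure for modern memory bandwidth).
bandwidth_bytes_per_s = 10 * 10**9

for size_bytes, label in [(10**9, "1 GB"), (10**12, "1 TB"), (10**15, "1 PB")]:
    seconds = size_bytes / bandwidth_bytes_per_s
    print(f"{label}: {seconds:g} s per full scan")
# A gigabyte takes a tenth of a second; a terabyte already takes 100
# seconds -- far too long to redo for every one of millions of queries.
```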

Of course, one might say that a second is not such a long time anyway. Mrs Jensen could reasonably be expected to wait a second while we reorganise the entire library according to her preferences. But the problem is that our hypothetical web service hopes to have more customers than just Mrs Jensen; we would like to serve more than one user per second. In fact, we hope for a few million per second. And we would rather not invest in a few million computers to serve these customers.

On the other hand, we must perform the calculation on the spot, because there are too many different customers for us to prepare a personalised copy of the entire library for each of them. That approach would require a copy of the entire data set per user, and we don’t have that much storage space.

In summary, the algorithmic problem arises because the data set is too large to sort on the fly, and there are too many customers to sort everything in advance.

Without an algorithmic solution to this problem there would be no personalisation. In fact, there are many computational problems where we are less lucky: algorithms research has failed to find a clever solution, because certain desirable algorithms simply do not exist. Sometimes this nonexistence can even be proved.
The Austrian logician Kurt Gödel’s own research in the 1930s, long before computer science was a discipline of its own, focused precisely on rigorous mathematical arguments for the absence of algorithms for specific problems.

But it turns out that for the problem at hand, an efficient algorithm does exist: the Threshold Algorithm. It is short and elegant, and you can amuse yourself by trying to come up with it, or by googling the answer. The elegant solution requires an ingenious data structure consisting of the entire database sorted by each parameter individually, together with some additional bookkeeping and cross-references between the lists. In these sorted lists, a few lookups suffice to find the answer. The algorithm itself is easy to follow and easy to implement as program code, whereas the analysis of its correctness and complexity requires a little more computer science background.
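For the curious, here is a simplified sketch of the Threshold Algorithm for a monotone aggregation function such as a (weighted) sum; the variable names are mine, and a real implementation would add more careful bookkeeping:

```python
import heapq

def threshold_top_k(sorted_lists, scores, k, aggregate=sum):
    """Threshold Algorithm (after Fagin, Lotem, Naor) for top-k queries.

    sorted_lists -- one list per attribute, each holding (value, object)
                    pairs sorted by value in descending order
    scores       -- scores[obj] gives object obj's value in each attribute
                    (this plays the role of 'random access')
    aggregate    -- monotone aggregation function, e.g. sum

    Returns the k objects with the highest aggregate score.
    """
    top = {}  # object -> aggregate score, for objects seen so far
    for depth in range(max(len(lst) for lst in sorted_lists)):
        # Sorted access: one step down every list in parallel.
        last_seen = []
        for lst in sorted_lists:
            value, obj = lst[min(depth, len(lst) - 1)]
            last_seen.append(value)
            if obj not in top:
                # Random access: fetch the object's remaining attributes.
                top[obj] = aggregate(scores[obj])
        # The threshold: no unseen object can score higher than this.
        threshold = aggregate(last_seen)
        best = heapq.nlargest(k, top.values())
        if len(best) >= k and best[-1] >= threshold:
            break  # the current top k can no longer be overtaken
    return heapq.nlargest(k, top, key=top.get)

# Two attributes; each object has a value in each.
scores = {"A": [0.9, 0.1], "B": [0.6, 0.7], "C": [0.2, 0.9]}
sorted_lists = [
    sorted(((values[i], obj) for obj, values in scores.items()), reverse=True)
    for i in range(2)
]
print(threshold_top_k(sorted_lists, scores, k=1))  # → ['B']
```

In the example the algorithm stops after two rounds of sorted access, without ever computing a full sort: once the k best scores seen so far reach the threshold, no unseen object can overtake them.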

Theoretical computer science is a mathematical discipline, so the algorithm was published together with a rigorous analysis of its efficiency and a formal proof of its optimality. The Threshold Algorithm was discovered and published in 2001 by three Israeli and American researchers.
The Gödel Prize is an award in theoretical computer science, so it has probably also been decisive that the mathematical aspects of the result meet mathematics’ aesthetic requirement of simplicity. The Threshold Algorithm can be described in 10 lines, and the authors were originally worried that the manuscript would be rejected because it looked too easy. This did not happen. The paper was immediately accepted to the ACM Symposium on Principles of Database Systems in 2001, won the best paper award there, won the symposium’s Test-of-Time Award 10 years later, and now receives the Gödel Prize.

Many algorithms in the computer science literature are pure basic research and never find their way into a product, but the Threshold Algorithm is a good example of one of the by now many pieces of theoretical computer science that leave a direct mark on our reality, as that reality becomes more and more algorithmic. It chooses our next book, our next news item, and perhaps our next relationship.

The Gödel Prize has been awarded annually since 1993 for outstanding achievements in theoretical computer science. The award ceremony alternates between North America and Europe.

Fagin, Lotem, Naor, Optimal aggregation algorithms for middleware, J. Comput. Syst. Sci. 66 (2003) 614–656. [PDF]

Written by thorehusfeldt

July 17, 2014 at 09:47

Posted in Exposition

Den personlige virkelighed

leave a comment »

En lettere forkortet udgave af denne tekst er publiceret i Weekendavisen 2014 #27 fra d. 4. juli 2014 i sektionen Idéer, s. 12.

Algoritmer. Den 9. juli uddeles Gödelprisen for teoretisk datalogi ved en konference i København. I år går prisen til en algoritme, som blandt andet muliggør personaliseret informationssøgning. Eksistensen af den slags effektive algoritmer bestemmer, hvordan den digitale virkelighed ser ud for os.


Informationsteknologien har medført to helt gennemgribende ændringer i vores måde at tilgå information. I »gamle dage« i midten af halvfemserne var information organiseret som et bibliotek. Tjenesten Yahoo! havde kategoriseret webbens sider efter emne. Ville man finde noget om Paris, så var man nødt til at klikke sig igennem et hierarki – geografi, Europa, Frankrig, byer, Paris – ganske som at finde den rette hylde i kommunebiblioteket. Googles siderangeringsalgoritme »Pagerank« fra 1998 indebar en fundamental nytænkning af denne århundreder gamle tradition. Nu kunne man skrive »Paris« ind i et søgefelt, og straks fik man en liste med alle websider om Paris, sorteret efter kvalitet – lidt som at gå op til bibliotekaren, mumle et enkelt ord, og lade hende gøre alt opslagsarbejdet. Kategorierne var borte og Googles algoritme blev så fundamental, at det i dag for de fleste virker utænkeligt at gennemsøge webben på nogen anden måde end ved rangeret fritekstsøgning.

Den næste ændring kom først i slutningen af nullerne: personaliserede søgeresultater. Opslaget »Hilton Paris« betyder nemlig noget helt andet for Jürgen, en tysk turist i en europæisk storby, end for Ashley, en amerikansk teenager. Derfor forsøger søgemaskiner som Google at tilpasse den sorterede liste af svar til den individuelle bruger, baseret på de digitale spor, vi har efterladt. Brugerens alder og geografisk opholdssted er nok til at anbefale det rigtige hotel til Jürgen og den rigtige kendis til Ashley. Googles vision er at besvare spørgsmålet inden brugeren har stillet det. For at blive i biblioteksmetaforen svarer det til at gå op til bibliotekaren og få stukket en bog i hånden, før man åbner munden.

Personalisering er den informationsvirkelighed, vi lever i lige nu. Google personaliserer websøgninger, Amazon personaliserer boganbefalinger, Netflix film og Spotify musik. Partnerformidling har altid været personaliseret, og de store elektroniske dating sites er selvfølgelig også med på den vogn. Personalisering har en hel række interessante potentielle konsekvenser, fx skabelsen af såkaldte potentielle filterbobler i informationsformidling, som WA tidligere har belyst [Det hele handler om dig, WA 20, 2011].

De algoritmer, der indgår i at få personalisering til at virke, har altså direkte og mærkbare konsekvenser for, hvordan vi opfatter verden – på godt og ondt. En af dem, tærskelalgoritmen for top-k-søgninger af Ron Fagin, Amnon Lotem og Moni Naor modtager i disse dage den teoretiske datalogis fornemste udmærkelse, Gödelprisen, ved en konference i København.

Samtidigt med at algoritmerne træffer en masse valg for os og véd mere og mere om os, véd vi temmelig lidt om algoritmer. En algoritme er nu hverken kunstig intelligens, sort magi eller »hvad en programmør siger, når han ikke har lyst til at forklare, hvad der sker.« Algoritmer er problemløsningsmetoder, udtrykt så præcist, at de kan udføres af en computer – som man kan tænke på som en uendelig tålmodig, lydig og irriterende ordret tolvårig.

Inden personaliseringsbølgen fra 2010 kunne en søgetjeneste nøjes med at sortere information efter bare én parameter. En bestsellerliste kunne fx rangordne bøger efter mest solgte. Men personaliseringen åbner muligheden for at rangere og kategorisere bøger efter pris, alder, antal sider, sprog, genre, osv.
Om en bestemt bog er skrevet af en dansk forfatter, kan være meget relevant for fru Hansen, men helt irrelevant for fru Jensen. Og fru Hansen hader måske krimier, mens fru Jensen elsker dem. Personalisering er altså implementeret som en vægtning af produktets mange parametre, og denne vægtning er skræddersyet til netop fru Jensen.

Men hvordan holder vi nu vores bøger sorteret, når hver bruger har sin egen opfattelse af, hvad der er relevant? I princippet er det meget nemt at løse: Gennemløb alle bogtitler, beregn fru Jensens score for hver af dem, sortér resultatet efter score, og præsentér de 20 bedste resultater for hende.

Desværre tager det for lang tid.

Det skal man måske lige gentage, fordi det strider imod vores intuition: Det tager for lang tid at sortere.

Hvordan kan det være? Bliver datamater ikke hurtigere og hurtigere? Jo, det gør de skam. Den teknologiske udvikling har selvfølgeligt gjort processorhastigheder meget hurtigere siden Princeton-matematikeren John von Neumanns »ENIAC« fra slutningen af Anden Verdenskrig, som vejede 30 tons og bestod af 17468 radiorør. Men samtidigt er datamængder også blevet større og større. Maskinens hukommelse og problemernes størrelse er vokset endnu hurtigere end processorhastigheden. Vil man sætte det på en spids, kan man sige, at computere bliver langsommere og langsommere i forhold til de beregningsproblemer, vi udsætter dem for. Men som en tommelfingerregel gælder »det tager en computer et sekund at berøre hele datamængden« før som nu. Den udfordring er altså et aspekt ved det, vi kalder massedata, big data: at vi knapt nok har tid til at besvare endog de mest enkle spørgsmål til meget store datamængder.

Nu kunne man måske sige, at et sekund måske ikke er så voldsomt lang tid alligevel. Fru Jensen vil nok godt kunne vente, til vi har omorganiseret hele bibioteket, så det passer til netop hendes præferencer. Problemet er bare, at vi også håber, at der er flere kunder i butikken end fru Jensen; vi vil gerne servicere mere en én bruger per sekund. Helst et par millioner per sekund. Og vi vil nødigt investere i et par millioner computere til at betjene disse kunder.

Vi skal gennemføre beregningen her og nu, fordi der er for mange forskellige kunder til, at vi kan forberede en kopi af hele biblioteket til hver af dem. Det ville kræve en kopi af hele datamængden for hver bruger, og så meget lagerplads har vi ikke.

Det algoritmiske problem opstår altså fordi vi skal håndtere for store datamængder til at sortere alting på stående fod og for mange kunder til at sortere alting på forhånd.

Uden sådan en algoritmisk løsning til dette problem ingen personalisering. Der findes forresten rigtig mange beregningsproblemer, hvor vi er mindre heldige, og hvor det ikke er lykkedes algoritmeforskningen at finde en snedig løsning. Visse ønskværdige algoritmer findes ganske enkelt ikke. Det kan man så også bevise.
Den østrigske logiker Gödels egen forskning i 1930erne, længe inden datalogi var blevet en egen disciplin, drejede sig netop om at bevise fraværet af algoritmer for bestemte problemer.

Men det viser sig altså, at en sådan algoritme faktisk findes, nemlig tærskelalgoritmen. Den er kort og elegant, og man kan more sig med selv at komme på den eller google efter svaret på nettet. Den elegante løsning kræver en sindrig datastruktur, hvor man i forvejen har sorteret hele databasen for hver enkelt parameter for sig, samt nogle ekstra indholdsfortegnelser med krydshenvisninger mellem listerne. I disse sorterede lister kan man så nøjes med ganske få opslag. Selve algoritmen er nem at følge og let at implementere som programkode, hvorimod analysen for korrekthed og effektivitet kræver lidt mere datalogisk baggrundsviden.

Teoretisk datalogi er en matematisk disciplin, så algoritmen som forskningsresultat publiceres sammen med en stringent analyse af dens effektivitet og et formelt bevis af dens optimalitet. Tærskelalgoritmen blev opdaget og publiceret i 2001 af tre israelske og amerikanske forskere, som derfor modtager Gödelprisen i år.
Gödelprisen er en udmærkelse inden for teoretisk datalogi, så det har sikkert også været udslagsgivende, at de matematiske aspekter af resultatet opfylder matematikkens æstetiske krav til enkelhed. Selve tærskelalgoritmen kan beskrives på 10 linjer, og forfatterne bag resultatet var i sin tid alvorligt bekymrede for at få manuskriptet afvist, fordi det så for let ud. Sådan gik det dog ikke. Artiklen blev straks optaget på ACM Symposium on Principles of Database Systems i 2001, vandt prisen for bedste artikel, vandt symposiets Test-of-Time Award 10 år senere og får nu altså Gödelprisen.

Mange algoritmer i den datalogiske litteratur er ren grundforskning og finder aldrig vej til et produkt, men tærskelalgoritmen er et godt eksempel på et af de efterhånden mange stykker teoretisk datalogi som sætter et umiddelbart præg på vores virkelighed, efterhånden som den virkelighed bliver mere og mere algoritmisk. Den vælger vores næste bog, vores næste nyhed og måske vores næste parforhold.

Gödelprisen uddeles årligt siden 1993 for fremragende resultater i teoretisk datalogi. Prisoverrækkelsen finder sted på skift mellem Nordamerika og Europa.

Fagin, Lotem, Naor, Optimal aggregation algorithms for middleware, J. Comput. Syst. Sci. 66 (2003) 614–656. PDF


This is a Danish popular science introduction to the threshold algorithm of Fagin, Lotem, and Naor, which received the Gödel prize at ICALP 2014.
A shortened form of this manuscript appeared in the Danish weekly broadsheet Weekendavisen.

Written by thorehusfeldt

July 5, 2014 at 10:38

Posted in Danish, Exposition

Turing test joke

leave a comment »


Alan Turing gets pulled over by a police officer.

“Hi. I’m Alan. How may I help you?”

“Do you have any idea how fast you were going?”

“We are discussing you, not me.”

Written by thorehusfeldt

June 10, 2014 at 13:38

Posted in Exposition, Silly


leave a comment »

This is a draft of a description of the field algorithms I was asked to do for the Swedish Research Council (Vetenskapsrådet). The section structure is given. All kinds of feedback (in particular from native Swedish speakers about my clunky grammar) are welcome.


Beskrivning av forskning inom ämnet

Studiet av algoritmer är en central del av datavetenskapen som behandlar konstruktion och analys av metoder för resursmedveten beräkning. I föreliggande beskrivning omfattar ämnet även stora delar av beräkningsteorin, inklusive teorin för beräkningskomplexitet.

De mest väletablerade delarna av ämnet, konstruktion och analys av algoritmer samt beräkningskomplexitet, är av matematisk natur och har starka kopplingar till olika matematiska discipliner som grafteori, kombinatorik, logik och sannolikhetsteori. Det huvudsakliga målet är att systematiskt studera beräkning med syftet att etablera allmängiltiga utsagor om grundläggande beräkningsproblem i olika beräkningsmodeller. Bidraget till praktiska beräkningar är i form av den allmänna förståelsen för beräkningsproblem samt utvecklandet av nya metoder för att lösa dessa.

Mer tillämpad forskning inom ämnet fokuserar på konstruktion och analys av algoritmer från områden både inom och utanför datavetenskapen, t ex biologi, ekonomi, signalbehandling, och WWW. Forskningen kring dessa algoritmer innehåller även empiriska studier av deras beteende, ingenjörsaspekter av deras utveckling och återkoppling till modelleringsaspekter i användningsområdet. Med ökad algoritmisering av i princip alla datadrivna vetenskaper är potentialen för synergier med andra discipliner enorm.

Alla delar av algoritmisk forskning påverkas av nya beräknings- och datamodeller som motiveras av den aktuella utvecklingen inom informationsteknologin, t ex massiva datamängder, parallella och distribuerade arkitekturer, strömmad data, energieffektiva beräkningar, säkerhets- och integritetsaspekter, och experimentella teknologier som kvantdatorer.

Styrkor och svagheter

Trots att datavetenskapen bara är drygt 50 år gammal, så har många algoritmiska resultat spridits till andra vetenskaper och undervisas i grundläggande kurser. Områdets styrka är ett starkt matematiskt fundament, en tradition för att betrakta problem på många abstraktionsnivåer och teknologisk lyhördhet och samutveckling med informationsteknologin. Internationellt finns det ett mycket starkt utbyte både med andra vetenskaper och programvaruindustrin.

Vill man peka på områdets svagheter, så skapar en del av teoribildningen – i likhet med annan grundforskning och tillämpad matematik – ibland rent internt motiverade frågeställningar som visar sig vara ofruktbara. Vissa resultat är ej direkt applicerbara på de beräkningsproblem som uppstår i olika tillämpningar. En annan utmaning är den starka ställningen som analysen av algoritmers värstafallsbeteende har – denna dominerar områdets kvalitetskriterier på bekostnad av indatamodeller som är svårare att definiera stringent, men som i några tillämpningar vore mera relevanta. Behovet av matematisk stringens och teknologisk flexibilitet leder också till att profilen för kandidater till forskning i algoritmik är attraktiv för många andra riktningar både inom och utanför akademin, vilket skapar ett visst rekryteringsproblem.

Specifikt svenska svagheter i området behandlas nedan.

Trender, utvecklingstendenser och utvecklingspotential

Den grundläggande forskningen inom teoretisk datavetenskap, såsom algoritmteori, är bland de äldsta ämnena inom datavetenskapen och har mognat till en matematisk disciplin med rik teoribildning och starka resultat. Det traditionella syftet är att skapa, analysera och resonera kring algoritmer för problem på diskreta strukturer med avseende på stringenta värstafallsgarantier för algoritmers asymptotiska resursförbrukning och lösningskvalitet som funktion av problemstorleken. Man kan idag påstå att de allra flesta naturliga beräkningsproblemen är, grovt sett, klassificerade med avseende på beräkningskomplexitet. Pågående forskning har flyttat fokus till mer specialiserade problem och sofistikerade frågeställningar, som ofta är inomvetenskapligt motiverade.

Algoritmisering och datorisering

Algoritmteorin utmanas ständigt av nya beräkningsmodeller motiverade av aktuell informationsteknologi och nya frågeställningar från andra vetenskaper och samhället. Ett numera klassiskt exempel är beräkningsbiologin, som idag är ett separat forskningsområde med egna utbildningar, publikationsfora och anslag, men som fortfarande utgör en rik källa till algoritmiska frågeställningar; algoritmik bidrar både till effektiv problemlösning, t ex av olika proteinviknings- eller sekvenseringsproblem, men också till modellering. Till exempel är synen på evolution under den algoritmiska linsen att DNA är en algoritm som exekveras av proteiner. Härvid betonas fenomenets process- och informationsaspekter, vilket kompletterar t ex elektrokemiska eller zoologiska perspektiv på evolution.

Ett exempel på en teknologisk påverkan är den pågående utvecklingen inom minnesteknologi och sensorer, där mängden av data som insamlas och behöver analyseras överstiger utvecklingen i beräkningskraft. Man kan säga att moderna datorer blir allt långsammare i relation till sina arbetsuppgifter. Behovet av och effekten av algoritmer för analys av massiva datamängder är nu för tiden enorm, inte bara i vetenskapliga tillämpningar, utan också i samhället, där algoritmer används till att reglera grundläggande demokratiska mekanismer som tillgång till information. Detta skapar stort behov av stringenta analyser och nya lösningar.

Det algoritmiska perspektivet fortsätter att ha konceptuell påverkan på traditionellt icke-datalogiska discipliner, när dessa utvecklar modeller som tar hänsyn till processers effektivitet och resursförbrukning. Ett exempel är statsvetenskap och ekonomi, där algoritmisk spelteori, algoritmisk mekanismkonstruktion och computational social choice är aktuella ämnen som uppfattar ekonomins individbegrepp som resursbegränsade agenter vars val inte bara är individuellt optimala men även skall kunna fattas av en effektiv process. Dylika utvecklingar har åtnjutit stor internationell uppmärksamhet och har rik potential för tvärvetenskaplig korsbefruktning.

Några trender

Några konkreta aktuella trender inom algoritmforskning med stor internationell uppmärksamhet är:

  • Nya tekniker och modeller för konstruktion och analys av algoritmer utöver deterministisk och sekventiell beräkning: randomiserad, algebraisk, approximativ, exakt, on-line, strömmande, energisnål, parallell, fördelad, olika minnesarkitekturer, olika programspråksparadigm såsom constraint eller map–reduce, alternativa beräkningsmodeller som kvantdatorer.
  • Tillämpad algoritmik inom andra delar av datavetenskapen och relaterade tekniska vetenskaper: artificiell intelligens, programspråksteori, databaser, datorsyn och -grafik, robotik, sociala nätverk, sensornätverk, signalbehandling, etc.
  • Ramverk för empirisk analys av algoritmer.
  • Algoritmiska metoder och modeller i andra, icke-datalogiska discipliner: algoritmisk spelteori (ekonomi), sociala nätverk (sociologi), computational finance, datoriserade bevis i matematik, beräkningskemi, etc.
  • Algoritmiska aspekter av datasäkerhet: främst kryptologi inklusive protokoll och beräkningskomplexitet, sekretess, integritet, säkerhet, autentisering, etc.
  • Nya modeller och tekniker för beräknings- och kommunikationskomplexitet, avrandomisering, undre gränser, icke-approximerbarhet, kvantitativ komplexitet av svåra problem, parameteriserad komplexitet och andra flervariata analysmått.
  • Datastrukturer och algoritmer för analys och behandling av massiva datamängder, maskininlärning, dataanalys och klassificering av ostrukturerad data.
  • Konstruktion och analys av algebraiska, diskreta och symboliska algoritmer för vetenskapliga beräkningar.
  • Stringent analys av optimeringsalgoritmer inom linjär och icke-linjär programmering under olika fördelningar av indata.
  • Algoritmer och samhället: analys och metoder för informationssäkerhet, transparens.

Att stora delar av natur- och teknikvetenskaperna just nu upplever en ökad algoritmisering av både tekniker och tankesätt betonar behovet av både algoritmisk kompetens på den icke-datalogiska sidan och ökad domänkunskap på den datalogiska sidan.

Ämnets ställning i Sverige och internationellt


Ur internationellt akademiskt perspektiv står forskning om algoritmer som ett centralt och stort ämne inom datavetenskap. Till exempel är Algorithms and Theory det datavetenskapliga ämnet med flest professorer vid amerikanska toppuniversitet. Även de teoretiska aspekterna är utomordentligt välfinansierade; till exempel grundades Simons Institute for the Theory of Computing vid UC Berkeley år 2012 med ett anslag på 60 miljoner USD.

Algoritmiska kompetenser uppfattas numera som nödvändiga för många andra discipliner. Till exempel läser mer än var fjärde av samtliga studenter på Princeton University grundkursen i algoritmer och datastrukturer. Algoritmiskt tänkande är numera ett grundskoleämne i Storbritannien. På arbetsmarknaden är algoritmisk kompetens starkt efterfrågad av attraktiva programvaruföretag som Google.

I Sverige

Undervisning inom algoritmer och datastrukturer uppfattas som central datalogisk verksamhet på ett flertal universitet och högskolor och ingår i grundutbildningen på många datavetenskapliga och -tekniska program. Däremot är kurser i beräkningskomplexitet och -teori mer sällsynta, och gruppen av svenska studenter utanför datavetenskapliga och -tekniska program som uppnår algoritmisk kompetens är försumbar.

Nivån för svensk forskning inom ämnet är av internationell toppklass, men storleken på de flesta forskningsgrupperna i Sverige inom området är under kritisk nivå.

Studiet av effektiva algoritmer har i Sverige inte utvecklats på ett så positivt sätt som man kunde ha hoppats, och en förväntad expansion har inte kommit till stånd. Kontrasten är extra oroande när man jämför med utlandet. Om området ej uppnår kritisk nivå kan detta ha negativa effekter på kunskapsnivån i landet. Det finns högst ett par starka grupper inom algoritmforskning och kompetensen är otillräcklig för att ens på ett adekvat sätt klara av behovet av avancerade kurser. Beräkningskomplexitet är än mer glest representerat. Denna negativa utveckling påverkar både den datavetenskapliga forskningen och andra datadrivna vetenskaper som upplever ett starkt behov av algoritmisk kompetens samt stora delar av innovativ programvaruproduktion, inte minst i relation till stora datamängder (aktuella modeord är Big Data och Cloud Computing).

Särskilda behov av forskningsinfrastruktur

Forskning inom algoritmer skapar inte i sig något behov av forskningsinfrastruktur utöver de beräknings- och datalagringsresurser som krävs av ett eventuellt tillämpningsområde. Några utmaningar som datadrivna vetenskaper kommer att möta i samband med analys och förvaltning av data (skalbarhet, tillgänglighet, effektivitet, visualisering, integritet, tolkning, etc.) adresseras dock av just algoritmforskningen, varigenom det kan uppstå behov av realistisk infrastruktur för validering och simulering av sådana lösningar.

Written by thorehusfeldt

June 5, 2014 at 15:17

Posted in Exposition