The Root of Borscht

Andrei Vazhnov's blog

This Blog Has Moved

I have permanently moved the blog to http://andreivazhnov.net/english/, and you can find these and all the new posts there.

Thank you so much for visiting!

Andrei

Written by Andrei Vazhnov

March 16, 2013 at 7:05 pm

Posted in Uncategorized

The Hundred Year Prophecy


“I draw the conclusion that … the economic problem may be solved, or be at least within sight of solution, within a hundred years. This means that the economic problem is not—if we look into the future—the permanent problem of the human race.”

— John Maynard Keynes, 1930



Contents:
The Fractal Snowflake of Technological Leverage
The Great Hope of the 20th Century
The Magical Machine Has Already Been Invented
Vision 2030: Keynes or Fourastié?

Knowing of my interest in automated manufacturing, a friend sent me this video of a professor who is developing a 3D printer that will be able to print entire houses. It is an inspiring presentation, since one day this technology will solve the housing problem that is very real and urgent for the more than a billion people who live in squalid conditions, deprived of even basic necessities. Furthermore, construction work is a leading cause of grave job-related injuries, so the house-printing machine will have the additional benefit of eliminating these types of accidents.

Nevertheless, towards the end of the video, the professor acknowledges that the construction industry is a principal source of employment all over the world, and that it will be necessary to find alternatives for the people displaced by this technology. What are the possible options? It is not entirely clear. At one point, he says that perhaps some of those who used to work in the construction industry will now work in designing the houses, but that is a bit like saying that musicians displaced by the invention of sound recording should dedicate themselves to producing and selling records. Without a doubt, design work will absorb some of the newly benched workforce, but the market needs far fewer recording artists than live musicians. In the old days, if you wanted to listen to music at home, you had to hire someone to play it for you. But nowadays why would one hire a local musician when one can have the best in the world reproduced by a machine? And, as logic would have it, the vast majority of musician jobs went away for good when recording technology became commonplace.

Thinking about all this made me recall that over the last year, almost every time I talked to people about the promise of 3D printers and precision robotics, someone would inevitably ask, “And when all of this really comes to pass, when 3D printers and robots will be able to make practically anything, what will happen to the 30%-50% of the population that is now employed on assembly lines and in other types of manufacturing?” I usually give a standard response that goes something like this: “200 years ago, more than 80% of the population worked in agriculture, but nowadays nobody really misses manual agricultural work; we will always find new types of work to do.” And thus far that has undoubtedly been true: it is a lot more fun to be a lawyer, an engineer, an accountant, or a programmer than to harvest potatoes by hand. But is there a limit to this process of inventing new things to do as our machines liberate us from the work of our ancestors?



The Fractal Snowflake of Technological Leverage

In the year 1930, the noted economist John Maynard Keynes wrote an eloquent essay on this subject called “The Economic Possibilities for Our Grandchildren.” In it, Keynes predicts that in 100 years humanity will fully solve its “economic problem” (i.e. having to expend effort in order to produce goods and services), but he also warns that no country is ready for this new world and that we will face a series of grave crises before society adjusts to the new reality. Technology will bring us paradise, but the path towards it will not be easy.

The fundamental issue is that we are living in a world where technology can multiply the effort of a person or an organization by 1,000x or 1,000,000x, and we are already beginning to observe a tremendous bifurcation between those inside the centers of creativity and technology who enjoy this leverage and those outside. This pattern replays itself on multiple scales — not just between countries, but also between regions of a country, and even between neighborhoods of the same city.

To take just one example, here is an article in the Atlantic about how technology centers such as Palo Alto and Redmond are living through some of their best years while rust belt cities like Detroit are going through some of the toughest times in their modern history. Most of the industrial jobs that disappeared in the Great Recession never returned; nor will they, since structurally the economy does not need this type of manufacturing labor. To recover economically, many cities are betting on technology as a cure for their unemployment ills, but very few have been able to replicate the success of Silicon Valley on a scale that makes a difference, and even when they do, the city effectively splits in two — those within the new economy and those without — the same problem repeated in miniature, like a smaller copy nested inside the Mandelbrot set.

Keynes’s essay is rather long, so as a brief synopsis of his argument I would use the following analogy: let’s say we live in a world of 100 people, of which a third works in agriculture, a third builds houses, and the remaining third makes clothing. Thus all the basic needs are satisfied, and the world goes on like this for thousands of years. Then a smart guy, say John Smith, invents an awesome machine that does all the agricultural work automatically. The machine needs only one operator, so John produces the food for the other 99 people. For a brief time, 32 people are unemployed, but thankfully another smart guy invents a new product, a bicycle, and the unemployed little by little retrain and begin to produce these new items. In this analogy, Keynes agrees that human beings will always invent new needs to replace the things already automated, but his argument is that technology is automating tasks more rapidly than the speed with which we are able to invent new ones.
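
To see the race Keynes is describing in the simplest possible terms, here is a toy simulation of our 100-person world; every number in it is invented for illustration, not taken from Keynes or from any data.

```python
# Toy model of the race Keynes describes: jobs destroyed by automation
# versus jobs created by newly invented needs. Every number here is
# invented for illustration; nothing is empirical.

WORKFORCE = 100        # our imaginary world of 100 people
automation_rate = 3    # jobs automated away per decade
invention_rate = 2     # new kinds of jobs invented per decade

employed = WORKFORCE
for decade in range(1, 11):
    employed -= automation_rate   # machines take over existing tasks
    employed += invention_rate    # bicycles and other new products appear
    employed = max(0, min(WORKFORCE, employed))
    print(f"Decade {decade}: {employed} employed, {WORKFORCE - employed} displaced")
```

As long as the invention rate keeps up with the automation rate, the displaced are always reabsorbed; Keynes’s warning is precisely that the first number has started to outrun the second.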

Some key quotes from Keynes:

We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come–namely, technological unemployment. This means unemployment due to our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.

The prevailing world depression, the enormous anomaly of unemployment in a world full of wants, the disastrous mistakes we have made, blind us to what is going on under the surface—to the true interpretation of the trend of things.

Now for my conclusion, which you will find, I think, to become more and more startling to the imagination the longer you think about it. I draw the conclusion that … the economic problem may be solved, or be at least within sight of solution, within a hundred years. This means that the economic problem is not—if we look into the future—the permanent problem of the human race.

Why, you may ask, is this so startling? It is startling because—if, instead of looking into the future, we look into the past—we find that the economic problem, the struggle for subsistence, always has been hitherto the primary, most pressing problem of the human race—not only of the human race, but of the whole of the biological kingdom from the beginnings of life in its most primitive forms.

Thus we have been expressly evolved by nature—with all our impulses and deepest instincts—for the purpose of solving the economic problem. If the economic problem is solved, mankind will be deprived of its traditional purpose.

Yet there is no country and no people, I think, who can look forward to the age of leisure and of abundance without a dread. For we have been trained too long to strive and not to enjoy.



The Great Hope of the 20th Century


Keynes wrote these bold words in 1930 and in the next few decades ended up looking a bit ridiculous — just another hidebound Luddite fearful of progress — because after World War II, the years 1950-1990 brought growth in employment never before seen. The process that created all these jobs and seemingly put paid to Keynes’s prediction was described by Jean Fourastié in his 1950 book “The Great Hope of the Twentieth Century,” in which Fourastié suggested that every civilization goes through a progression of three stages: Primary, in which the society is mainly engaged in natural resource extraction and agriculture; then Secondary, principally based on manufacturing and machinery; and finally, when the machines automate all the manufacturing and extractive processes (including the production of machines), the civilization enters the Tertiary stage, dominated by services. Fourastié observed that this last transition is momentous because in a service economy you do not have the enormous economies of scale that characterized the age of the machines — you cannot replace the work of a doctor, a lawyer, or an accountant with a machine that does the work of hundreds, as used to happen habitually when excavators, tractors, and cranes were replacing the manual workers of yesteryear. According to Fourastié, the demand for services is practically unbounded, and the resulting availability of stable, good jobs is the Great Hope that would replace the turmoil and dislocation that convulsed the world in the first half of the 20th century. The now-common terms “Service Economy” and “Tertiary Sector” entered our vocabulary with the work of Fourastié.

Nowadays, about 70%-80% of the population works in the service sector, and, just as Fourastié predicted, this new economy has brought us many professions that give us a better quality of life and a much higher level of self-realization than the jobs we lost to the automation of assembly lines (here is a good reminder of what 1930s jobs were like).

As chance would have it, the same year that Fourastié published his hypothesis, British mathematician Alan Turing published the seminal paper “Computing Machinery and Intelligence,” in which he laid the foundation for the development of intelligent machines. In that paper, Turing wrote a sentence which, with time, may come to fulfill the vision of Keynes: “We may hope that machines will eventually compete with men in all purely intellectual fields.”

From today’s vantage point, we know that Turing was right: machines help us a great deal with all manner of intellectual work and improve our lives tremendously — I once asked an accountant what his life was like before Excel, and I can’t even describe the expression of disgust on his face. Recently, however, the process has sped up so much that software is conquering even highly compensated professions such as law, accounting, and finance. As the MIT economist David Autor explains (see “Armies of Expensive Lawyers, Replaced by Cheaper Software”), the economy is becoming “hollowed out” because the jobs most affected are neither blue-collar ones like plumbing, nor those at the highest level, such as CEOs, but the wide swath of white-collar jobs in the middle of the compensation scale. The central pillar of Fourastié’s hypothesis was that service sector jobs were immune to technological leverage. Turing and the rise of computer science have changed that; it just took a while to notice.



The Magical Machine has Already Been Invented

Is this software invasion of white-collar professions the first glimpse of Keynes’s prediction coming true? If he is right, the “economic problem” and the concomitant need for labor will disappear around the year 2030. In terms of age, we are just about the grandchildren Keynes addressed, and fairly soon we will get a chance to find out in person whether his crystal ball was any good. In the meantime, it is worthwhile to see how we are doing.


Says Keynes:

I look forward, therefore, in days not so very remote, to the greatest change which has ever occurred in the material environment of life for human beings in the aggregate. But, of course, it will all happen gradually, not as a catastrophe. Indeed, it has already begun. The course of affairs will simply be that there will be ever larger and larger classes and groups of people from whom problems of economic necessity have been practically removed. The critical difference will be realized when this condition has become so general …

It may seem that we are quite far away from this point, but I think there is one aspect of human life where Keynes’s vision has been fully realized: food production. The magical machine with which one person can feed a multitude of 99 was invented a while ago: in developed countries, only about 1.5% of the population is employed in agriculture, and this small group not only feeds the other 98.5% but also generates a great deal of food for export. This is the reality we are living right now, and this state of things would already be a vision of paradise for most of the countless generations who have lived and died on our planet. Despite all this, it is estimated that more than a billion people suffer from malnutrition and millions die each year from hunger. Harvard economist Amartya Sen won a Nobel Prize for showing, among other things, that modern-day famines almost never happen because of a lack of food, but because of failures of the political and economic systems in the affected countries. Thus we have a situation where the magical machine is drowning the world in food while a billion people remain malnourished, and, at the same time, for millions of poor people in the industrialized world obesity has become a key health risk.

Taking this as a model of what might happen, it seems that Keynes was right to worry, and we are quite far from being prepared for the day when our machines will print houses, clothing, and cars. What will happen when a large percentage of the population is no longer employable because their labor has no possible economic value? As the least bad of the possible consequences, we will experience an ever more aggravated problem with public finances, as nations face the need to support not only the retired but also those who have been permanently shut out.

If we pursue Keynes’s reasoning a bit further, there will come a point where fundamental needs such as housing and clothing are produced by machines at such low cost that a basic version of these goods and services will be available for a negligible price. This is what Keynes calls the critical point, where the absence of need has become general, and the gradual transition is already under way. It may seem far-fetched that the cost of a house could one day be negligible, yet with some products it has already happened: in many of the world’s poorest areas, like Brazilian favelas, you will see people talking on mobile phones and dressed in t-shirts and jeans. This is largely possible because machines have made these products so cheap that slightly outdated models are effectively free; often consumers even throw them away because they have no resale value. It was not always like this: just 15 years ago, mobile phones were a luxury available only to affluent Western consumers, and if we look 200 years back, shirts were so expensive that people used detachable collars to make them last longer. Now that item can only be found in a museum.



Vision 2030: Keynes or Fourastié?

So will Jean Fourastié’s Great Hope continue to spring in the 21st century? Will Keynes’s concerns prove misplaced, swept aside by the next economic boom as they were several times in the past? In the short term, I would say the answer is yes, and once again the disappearing jobs will be replaced by new ones. In the long run, however, the trajectory will likely favor Keynes. The cornerstone of Fourastié’s theory is that the Service Economy is not subject to the forces of technological leverage. Alan Turing’s invention has permanently changed that assumption, and it is only a matter of time until software automation spreads to an ever broader spectrum of professions.

How will we adapt to this world? Personally, I think there will always be things to do even beyond the critical inflection point envisioned by Keynes. As he himself notes, the absolute needs of humans are finite, but positional needs are probably infinite. When everybody has a yacht, people will want spaceships; when everyone has a spaceship, people will want private asteroids equipped with suspended low-gravity mini-oceans. Our creativity in inventing new needs is not in doubt. It is just that, if Keynes is right, at some point this century the invention of new needs may be one of the few jobs that our machines won’t yet be able to do.



Andrei Vazhnov
therootofborscht@gmail.com


Bibliography

1. Keynes, John Maynard 1930, “The Economic Possibilities For Our Grandchildren”

2. Fourastié, Jean 1950, “The Great Hope of the Twentieth Century”

3. Turing, Alan 1950, “Computing Machinery and Intelligence”
In this paper, Turing examines the possibility of constructing intelligent machines.

4. Russell, Bertrand 1932 “In Praise Of Idleness”
A similar essay by Russell, probably to some extent inspired by Keynes.

5. Markoff, John 2011, New York Times, “Armies of Expensive Lawyers, Replaced by Cheaper Software”

6. Andreessen, Marc 2011, Wall Street Journal, “Why Software Is Eating the World”


Written by Andrei Vazhnov

October 14, 2012 at 2:18 pm

Digital Delivery of Physical Goods



I’ve had several interesting conversations in response to this post. One of my friends asked the following:


3D printers are just another way of automating manufacturing. We have been automating manufacturing for decades, and if you go to a modern automobile plant, it is mostly automated already. Why is this going to be any different?

So this got me thinking about what makes 3D printing different from other forms of automation. First, I agree that, on a very general level, a computer controlling an assembly-line robot is not that different from a computer controlling the head of a 3D printer. However, the important distinction is that the movements of an assembly-line robot are bespoke: they do not follow a well-known standard. By contrast, a 3D printer can, in principle, produce any configuration of matter in space from a standardized digital description.
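
To make “standardized digital description” concrete, here is a minimal sketch that writes a tetrahedron as an ASCII STL file, one of the common interchange formats in 3D printing; the geometry is an arbitrary stand-in, and real models would come from CAD software.

```python
# Minimal sketch: write a tetrahedron as an ASCII STL file, a standardized
# digital description of a solid. The geometry is an arbitrary stand-in.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

# Four vertices (millimeters by convention) and the four triangular faces.
v = [(0, 0, 0), (10, 0, 0), (5, 10, 0), (5, 5, 10)]
faces = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (2, 0, 3)]

with open("tetrahedron.stl", "w") as f:
    f.write("solid tetrahedron\n")
    for i, j, k in faces:
        # Face normal (unnormalized; most tools recompute it anyway).
        n = cross(sub(v[j], v[i]), sub(v[k], v[i]))
        f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
        f.write("    outer loop\n")
        for x, y, z in (v[i], v[j], v[k]):
            f.write(f"      vertex {x} {y} {z}\n")
        f.write("    endloop\n")
        f.write("  endfacet\n")
    f.write("endsolid tetrahedron\n")
```

The file says nothing about any particular machine’s movements; any printer toolchain that reads STL can realize the same object, which is exactly what an assembly-line robot’s bespoke motion program cannot offer.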

Therefore, with 3D printing, the Internet becomes a distribution channel for physical goods. You no longer need access to shipping lanes or retail channels, and, most importantly, you do not need a lot of working capital tied up in inventory. This is revolutionary because any talented furniture, automobile, or fashion designer will be able to produce an amazing design and have a million or a billion people print it out for themselves, paying just for the raw materials and a small margin for the operator of the 3D printer. As a result, it will bring a lot of new players to the table, since most people do not have the working capital to produce a million items of their design in a factory and manage the associated inventory and distribution.

Also, it is important to keep in mind that since plants are already heavily automated and require little manual work, a large component of the labor content of creating, say, an automobile or a piece of furniture is in design, management, and marketing. Since there is an enormous social incentive to be the person who created this season’s summer fashion ensemble or a popular piece of furniture, there will be plenty of talented designers willing to do it for free. This is similar to the process by which Open Source software was developed, where peer recognition is a key motivation. Because of this, a large monetary component of the labor content disappears and costs become even lower.

Second, I think on some level we are inclined to view the 3D printer as an evolutionary development rather than a game-changer, partly because we are very used to the concept of a “printer.” We have all grown up with it, and, furthermore, when 2D printers first appeared, there was no great revolution — people just started using more paper. But the 2D printer predates the digital age by centuries. When the printing press was invented in the 15th century, there was in fact an enormous societal change — it set in motion the Enlightenment, which brought us the Industrial Revolution and continues to this day. What was the essence of the original 2D printer? It was that the marginal cost of making copies became negligible, at least compared to the vast labor of copying books by hand. Someone may counter that the invention of the printing press was an information revolution and thus is more appropriately analogized to the Internet, and I would agree; however, my general point is that when the marginal cost of a large class of goods becomes negligible, big changes usually happen. I think the vortex is one possible scenario, but there can be many others.

Written by Andrei Vazhnov

February 10, 2012 at 12:27 am

The Horn of Plenty: 3D Printers and the Future of Manufacturing


“It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with a tape on the beginning of which is written the standard description of some computing machine M, then U will compute the same sequence as M.”
— Alan Turing, 1936


“Any sufficiently advanced technology is indistinguishable from magic.”
— Arthur C. Clarke, 1961



Contents:
The Shape Shifter
The Magic Will Be Effective and Universal
Turing’s Genie Escapes the Digital Bottle
What Can They Do Now?
Fabrication Without Limits
Music, Painting, and the Vortex of Infinite Leverage


In classical mythology, Zeus, the ruler of Olympus, did not have it easy growing up. One could definitely say that he was one of the originals who “came up the hard way,” since he spent most of his childhood hiding in a cave because his father, Cronus (Time), had a decidedly ungodly predilection for devouring his own young.

The only food available to the future thunder-thrower was the milk of the goat Amalthea, who nursed him. Even at that early age Zeus was a strong young fellow, and one day, while playing, he accidentally broke off one of Amalthea’s horns. The horn then acquired the magical power of providing its owner with unending bounty, and it has ever since been a symbol of abundance, one still invoked today in holidays such as Thanksgiving.

Several thousand years had to pass before humanity took its first step towards getting one of these for itself. That step was taken in 1936 by Alan Turing, when he invented the concept now known as the Universal Turing Machine. His discovery has transformed and defined our lives so thoroughly that we can simply no longer see it for the remarkable flash of genius that it was 75 years ago.



The Shape Shifter

To see exactly what it was that Turing invented, imagine you were talking to Thomas Edison 100 years ago, and you pulled from your pocket a device whose small screen said, “Thursday, August 25, 10:36 am.” Without a doubt, Thomas Edison would be really impressed; he would probably say, “Wow, what an amazing tiny clock!” And what would probably impress him most is that the clock’s numbers change without the hum of any little built-in motor or any moving parts at all…

And yet that would be the least impressive thing about your pocket companion. At the flick of your finger, the same exact object would turn first into a Calculator, then into a Camera, into a Telephone, into a Music Player, into a Television, and into dozens of other things. In fact, “dozens” does not even begin to scratch the surface — your smartphone can run hundreds of thousands of different programs — and each of them would look to Thomas Edison like an entirely different invention. The “There’s an app for that!” slogan of iPhone aficionados would leave him totally befuddled and depressed — he would probably throw in the towel and retire right then and there, before you completely put him out of business with your digital witchcraft.

As we go about our day, we take it for granted that the same physical object can be thousands of different things at once — and that is the essence of Turing’s invention. In fact, every smartphone, every netbook, every desktop is a direct descendant of the Universal Turing Machine defined in his 1936 paper. To anyone who did not grow up with these things, this ability to shape-shift the same object at the click of a button would be magic at its utmost — far more so than TV, or electricity, or cars were in their day. That we think this is the most natural thing in the world is only a sign of how deeply Alan Turing’s idea forms the foundation of our lives and our economy. But what was the idea that made it all possible?



The Magic Will Be Effective and Universal

Before Turing, logicians had long struggled to define the notion of an “effective process” — a process by which a person could perform logical reasoning by blindly following simple rules that he or she does not have to understand. The reason this is important is that logic aims to be objective, and this goes to the heart of what “objective” means: if any person, regardless of training or education, could prove a theorem by following a series of simple rules that everyone agrees on, it is fair to say that the theorem has been “objectively” proven. It is objective because it does not rely on the understanding of a specific visionary mathematician and can be verified by anyone. The word “effective” here has its original connotation, stemming from the Latin efficere, “to accomplish”: the logicians were simply searching for a method by which a person could reason through doing, through performing simple actions.

Turing reflected deeply on what humans go through when they think and when they act, and he found a solution whose simplicity belies its vast power and generality. He realized that the simplest action is one that changes the physical world in one specific location, in the simplest possible way. He represented this simplest action symbolically as writing “0” or “1” on a tiny square of paper tape. He represented the person following the rules — the doer — as a simple machine with a mechanical head that can erase a “0” and replace it with a “1” and vice versa. The machine could also move along the tape to the left or to the right, analogous to a person moving to perform an action in a different part of the world.

Turing then proved that any set of actions, no matter how complex, can be represented by this type of simple machine, as long as it is fed enough tape to write its 1’s and 0’s. In short, what Turing had discovered was — in a manner of speaking — a quantum of action.
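
For the curious, here is a minimal sketch of Turing’s setup in modern code; the rule table below (a toy machine that flips every bit until it reaches a blank square) is my own example, not one of Turing’s.

```python
# A minimal sketch of Turing's setup: a tape of symbols, a head that reads
# and writes one square at a time, and a rule table telling it what to do.

def run(rules, tape, state="start"):
    tape = dict(enumerate(tape))          # sparse tape; unwritten squares are blank
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, " ")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write                 # the "simplest action": change one square
        pos += 1 if move == "R" else -1   # move along the tape
    return "".join(tape[i] for i in sorted(tape))

# Toy rule table: flip 0s and 1s until the first blank square, then halt.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt",  " ", "R"),
}

print(run(flipper, "10110"))  # -> "01001 " (the trailing blank comes from the halt step)
```

Note that the rule table is ordinary data handed to the same generic loop, a detail that foreshadows the universality discussed next.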

This by itself was an important breakthrough, but then Turing saw something else — he saw that not only the calculations but also the machine performing them can be represented as 0’s and 1’s on the same tape. This led him to a stunning realization: that you do not need a more complex machine in order to perform more complex tasks — that it is possible to construct one single machine that can do absolutely any task, regardless of its nature. It is worth spending a moment reflecting on how strange this discovery truly is, since it runs counter to all prior human experience. For example, if you want to write something down, you need a simple instrument, a pencil; but if you want to construct a building, you need all kinds of complex tools — cranes, bulldozers, and so on. Before Turing, it was clear to everyone that the more complex the task, the more complex the machine or the set of instruments had to be; today it is just as obvious to us that if I want to stop playing Solitaire and start editing family photos, I do not need to head to a photo supply store to get a different machine.

That this should be so — that infinite variety can be grasped by a single finite machine — has always been a source of mystery to me. This finite machine, technically known as the Universal Turing Machine, is the idea at the heart of every modern computer, and if we read the prophetic sentence of Turing’s 1936 paper translated into modern terminology, we can see the inklings of our world — glimpses of PCs, of smartphones with their endless variety of apps, of simulated environments in video games, and even of venerable pop-culture milestones such as The Matrix, in which one reality is simulated within another.

1936: “It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with a tape on the beginning of which is written the standard description of some computing machine M, then U will compute the same sequence as M.”

2012: “It is possible to invent a single machine which can be used to execute any software program. If this machine U is supplied with a hard disk on which is written the standard description of some software program M, then U will behave as if it were M.”


In Turing’s original model, the paper tape represented what we now call data storage, and the head that changes 0’s to 1’s was the model of what would later become the CPU. Of course, the humble tape has been replaced with ever larger memory chips, and modern CPUs change billions of 1’s and 0’s per second, but the basic concept remains thoroughly identical to Turing’s. Through the discovery that one single machine could simulate countless others, Turing connected the finite and the infinite right here in our physical world. Today we routinely hear about studies in which scientists use software to simulate the evolution of a hurricane over the next month, the changes to the ocean’s ecosystem over the next 100 years, the evolution of the galaxy over the next billion years, or even the future of the entire Cosmos. Through the concept of one machine simulating another, Turing made very real the essence of William Blake’s famous line, “To hold infinity in the palm of your hand / And eternity in an hour.”



Turing’s Genie Escapes the Digital Bottle

Today, three quarters of a century after Turing’s paper, many of the largest, most valuable corporations either make Universal Turing Machines (Apple, Samsung, Dell) or write programs for them (Microsoft, Oracle, Google, Facebook).

Yet, as amazing as the computer revolution has been, it is only half of the story that began with Alan Turing’s 1936 paper: In the 20th century, Turing Machines were mostly confined to working with information — your computer could display things on screen or write them into a file, but it could not create things in the physical world directly. Recently that has begun to change.

The second half of Turing’s story is the coming age of the 3D printer which will make the Horn of Zeus look like a rusty Lada next to a Lamborghini. For while the original Cornucopia only made food, Turing’s version will soon be able to provide not only nourishment but pretty much everything else: furniture, clothing, automobiles, and even a replacement for your kidney when you need one.

If you like a couch or a table at a friend’s house, one day you will be able to scan its bar code and have one just like it printed at a local fabricator’s office. If you like someone’s outfit in a TV show, you will be able to customize it from your Apple remote and have it sent to you from a fabricator down the street, or just print it out right at home.

In fact, the name “3D printer” sounds so mundane as to mask the true importance of the technology. It combines the now-banal concept of a “printer” with the “3D” prefix, which is often evocative of gimmicky technologies such as 3D television. Perhaps a better name would be the Universal Turing Fabricator, to better reflect the magnitude of the change that will unfold in the next few years.



What Can They Do Now?

I first read about the possibility of programmable fabrication in the mid-nineties, in Eric Drexler’s remarkably prescient 1986 book “Engines of Creation.” Ever since, news items about these technologies have attracted my attention, but for a long time they never seemed more than curiosities you would display in a science museum. In the last few years, however, practical advances have started coming rapidly. For instance, here is a gallery of user-created 3D objects at shapeways.com. Right now you can digitally design an object on a computer, and this service will print it out and mail it to you. Not only that, it can also mail it to someone else for a fee and deposit the money into your account, the same way an iTunes purchase sends royalties to P. Diddy.

Still, it is one thing to 3D-print funky costume jewelry or bookshelf knick-knacks; it is a whole different ball game to print a car, a kidney, or a nice BLT sandwich. How long will it take for the technology to mature to that point? Will it ever be possible to print things with complex interior structures, such as living tissue or electronic components? The answer is that, in prototype form, all of this has already been achieved, and, as recent history has proved on numerous occasions, what is possible in a lab today is commercial reality a few years down the road. Here is just a small sample of what you might expect to see at your local copy shop towards the end of this decade:

Cars — video: “3D printed cars may be the way of the future” (PCWorld)
Kidneys — video: “Surgeon Prints New Kidney on Stage” (a TED talk in which the surgeon prints a kidney on stage)
Guitars — video: “World’s First 3D-Printed Guitar” (shows the process of printing the guitar, set to the soundtrack of that very guitar)
Teeth — video: “Dental prosthetics market on the rise, boosted by 3D printing”
Food — Cornell Fabrication Lab makes edible objects with a 3D printer
Bikes — video



Fabrication Without Limits

When you watch videos of this technology in action, it is clear that it is still a bit clumsy, still in its early stages. It also seems almost trivial: just as a regular 2D printer places ink dots on paper, a 3D printer deposits layers of material one upon another according to instructions from the computer. It doesn’t really seem like a big deal. However, the power of this invention derives not from the printer itself, but from the Universal Turing Machine that animates its motion. And, as we have seen, there is no limit to the complexity of what a UTM can create.
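
To picture those computer-issued instructions, here is a toy sketch that “slices” a cylinder into layers and emits G-code-style commands of the general kind 3D printers consume; the dimensions, layer height, and extrusion values are all invented for illustration, and a real slicer does far more.

```python
# Toy "slicer": approximate a cylinder as a stack of circular layers and
# emit G-code-style commands (G0 = travel, G1 = move while extruding;
# lines starting with ";" are comments). Relative extrusion is assumed;
# a real slicer also handles infill, supports, cooling, and much more.
import math

radius, height = 10.0, 5.0   # object dimensions in millimeters (illustrative)
layer_height = 0.2           # thickness of each deposited layer
segments = 36                # straight segments approximating each circle

for layer in range(1, int(height / layer_height) + 1):
    z = layer * layer_height
    print(f"; layer at z = {z:.1f} mm")
    print(f"G0 X{radius:.2f} Y0.00 Z{z:.1f}")   # travel to the layer's start point
    for s in range(1, segments + 1):
        a = 2 * math.pi * s / segments
        x, y = radius * math.cos(a), radius * math.sin(a)
        print(f"G1 X{x:.2f} Y{y:.2f} E0.5")     # deposit material along the arc
```

Each pass of the loop is one deposited layer; the object emerges from nothing but a stream of such instructions.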

Turing himself was aware of how difficult it is for the mind to accept this infinite potential. In his classic 1950 paper, which remains one of the most cited works in philosophy, he commented as follows on the objection that there are some things machines will never be able to do, wryly noting that it mostly stems from the unconscious use of “scientific induction”:

No support is usually offered for these statements. I believe they are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behavior of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general.

When it came to the future potential of machines, Turing was most interested in creating artificial minds. He wrote a chess-playing program at a time when no computer could run it, and personally simulated the computer with pencil and paper, taking half an hour to calculate each move. At the time it seemed like an eccentric academic exercise, and yet Turing remarked in his paper:


We may hope that machines will eventually compete with men in all purely intellectual fields.

His pencil-and-paper program lost, but 45 years later a real computer defeated Garry Kasparov, the highest-rated player in the history of the game, and nowadays machines already compete successfully with humans in many fields: Google does a passable translation from almost any language, email and SMS deliver most of our communications, Siri is taking over the secretarial jobs that were already in decline due to word processing and groupware, Amazon and Google Books have supplanted most functions that used to be performed by libraries, e-commerce is replacing traditional retail jobs — the list is quite long.

With the arrival of 3D printers, it seems that only one correction is needed to update Alan Turing’s quote above for 2012 — remove the words “purely intellectual” — since it is rapidly becoming possible to build a single machine that can create any possible physical object, bringing the unlimited power of the UTM into the world of manufacturing.



Music, Painting, and the Vortex of Infinite Leverage

In all ages past, up until the 20th century, someone with a gift for painting could earn a decent living making portraits for people who wanted a remembrance. In fact, many of the old masters we admire in museums today earned their income mainly from commissioned portraits. With the arrival of the camera, the vast majority of these jobs disappeared almost instantly, while the few who were left standing survived by reinventing the profession. It is not a coincidence that 20th century art is drastically more abstract; it’s not that people suddenly became more creative — there was simply no value left in re-creating reality on canvas. In a cognate development, the “starving artist” is a relatively modern concept, reflecting the fact that the traditional economic basis of the profession had largely disappeared.

Similarly, there used to be good money in being a musician: if you wanted to enjoy Chopin before turning in, you had to hire local professionals to come to your house and provide that service. With the development of the record — quite literally a 2D imprint of sound — no one wanted to be inconvenienced by having strangers over to perform music when they could have the same experience for a small fraction of the price.

The fact that portrait painting and musical performance have disappeared as significant sources of steady employment does not imply that the demand for these services has decreased — on the contrary, people take more portraits and listen to more music than ever before. It’s just that all the economic benefits of this value chain now accrue to the top 100-500 people in each profession (e.g. Elvis Presley, Brangelina), not to the hundreds of thousands as used to be the case in the old days. This is a natural choice for most people — why would one want to hear the music or look at the paintings of mediocre artists and musicians who live nearby when one can enjoy the very best person in whatever genre one happens to like?

Every time marginal costs fall to nothing, a vortex of infinite leverage forms. The value chain that was heretofore distributed is now funneled to the people at the very top of the profession — almost everyone else becomes a hobbyist. This has already happened in the transition from painting to photography, from live music to radio and records, from theater to movies and television.

This is what Marc Andreessen referred to when he used the provocative phrase “Why Software Is Eating the World.” He argues that the lofty valuations of technology companies do not represent a bubble. According to him, not only is there no tech bubble, but companies like Facebook, Groupon, and Skype may, if anything, be valued too low, since they could end up vacuuming up the value chains of entire industries, as Amazon is doing with retail and publishing and Apple did with music. As an investor in these companies, Andreessen may be viewed as biased towards optimism, but I think his larger point is correct: software represents a vortex of infinite leverage on an entirely new scale, and it is only a matter of time before what happened to music, painting, and theater happens to all of manufacturing.

Why must this be so? We can again look to Turing’s insights for the answer. Note that the terms “program” and “programmer” did not yet exist when Turing was writing; he used “instruction table” for what we now call a program and “mathematician with computing experiences” for what we call a programmer.


Instruction tables will have to be made up by mathematicians with computing experiences and perhaps a certain puzzle-solving ability. There will probably be a great deal of work to be done, for every known process has got to be translated into instruction table form at some stage.

The process of constructing instruction tables should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.

I think software is still regarded today as an “industry” in its own right. But that is an incomplete view of the situation: software is to business what mathematics is to science — it is a language of description, and every known business process will eventually be translated into it. As Turing’s quote above suggests, software is the stories we tell our machines so that they can liberate us from whatever is mechanical and mundane — i.e., from the type of activities that we generally call “work.”

It also reflects something else that is fundamentally different about software — you never need to build anything twice. If you have one bridge in your town and you need a second one in a new location, you have to hire the construction crews again. If you need Microsoft Word at a new location, you do not need to hire programmers to build you another one.

This means that, once 3D printing technologies are perfected, a giant vortex will form into which all of manufacturing will rapidly disappear, and, just as happened to musicians, all the value in manufacturing will be funneled to star industrial designers who will be known by name, like Jony Ive at Apple today. A residual amount will go to the makers of 3D printing and robotics hardware, though, like the CD-player makers of today, they will eventually become low-margin businesses as the technology matures.

How will this transformation compare to its historical antecedents in music, theater, and painting?



I will take a look at this in the next few weeks.

Bibliography

1. Drexler, Eric 1986, “Engines of Creation: The Coming Era of Nanotechnology”
This book popularized the notion of nanotechnology and influenced a lot of science fiction writers. However, it also has a lot to say about many other interesting futuristic topics.

2. Turing, Alan 1936, “On Computable Numbers, with an Application to the Entscheidungsproblem”
This is Turing’s original paper; it defined the concept of the stored-program computer and started the road to the digital age.

3. Turing, Alan 1950, “Computing Machinery and Intelligence”
In this paper, Turing examines the possibility of constructing intelligent machines and describes his now-famous “Turing Test,” whereby a machine should be considered intelligent if it can convince a human judge that it is human. It is one of the most cited papers in philosophy because Turing was among the first to try to define rigorously an answer to the question “What is mind?”

4. New World Encyclopedia article on Alan Turing. http://www.newworldencyclopedia.org/entry/Alan_Turing

Version 1.0
Andrei Vazhnov
therootofborscht.wordpress.com

Written by Andrei Vazhnov

February 7, 2012 at 10:21 pm

Anthology and Anthosphere



On several recent occasions, I came across the word anthology and noticed that I had no idea what its constituent parts meant. I found this a bit unusual since most of the time, when confronted with a word that ends in “-ology,” you kind of know subconsciously where it comes from: geo-logy is the study of “geo,” which has something to do with Earth; bio-logy studies the living; and anthropo-logy is the study of humans. But “anthology” gave me pause, since I suddenly realized that I could not figure out what it was the study of. Furthermore, the meaning of the word — a collection of the best written works — did not seem to be about studying anything at all. So I decided to look up its origin.

To my surprise, I discovered that unlike geology, biology, or anthropology, the word anthology has a very different history. Most of the “-ology” words derive from the ancient Greek root “logos” which means word, reason, or discourse. However, the word “logos” itself comes from the Greek verb “legein” which has two meanings: “to speak, to say” (from which “logos” originates), as well as an older meaning — “to pick, to gather.”[1]

It is from this second meaning of “legein” that we get anthology, a fusion of “anthos” (flower) and “logia” (gathering). So an anthology of the best written works is a collection of flowers, which I thought was a very apt and beautiful metaphor for the concept.


For their part, the verb “legein” and its Latin cousin “legere” derive from the Proto-Indo-European root *leg-, whose primary meaning is “to pick, to gather, to discern,” and it is a very prolific root in all Indo-European languages. There are, of course, many well-known offspring through its later “logos” branch (e.g. dialogue, monologue, epilogue), but there are also many words originating from its humble primary meaning “to pick, to gather” — some examples are: to select, to elect, to collect, to catalogue. An interesting offshoot is the word “legion,” a “gathering” or “collection” of soldiers, just as a “legend” is a collection of stories for reading.

If you imagine some Bronze Age tribe picking berries and mushrooms and coming up with an utterance to denote this simple activity, it is quite awe-inspiring to realize that the grain of that basic idea — choosing some items but not others — is still present when we participate in elections, read anthologies, and go to lectures about Roman legions. So many layers of abstraction!

I also looked through the other (“anthos”) branch of anthology’s etymological tree, but regrettably, on that side the pickings are slim; the only non-botanical word I could find is Anthotype, one of the precursors of modern photography, so named because its photosensitive material was derived from flower petals.

I for one think that this dearth of anthos-based words is a waste of a perfectly great root, so to revive its use I propose Anthosphere — the collection of all the world’s flowers (by analogy with biosphere) and Anthomania — having a passion for flowers 🙂

(Disclaimer: I have absolutely no knowledge of Latin or ancient Greek, so all of this is based on my reading of the dictionary entries listed in the Bibliography section below.)



End Notes

[1] Despite searching for a long time, I could not find a single accepted theory of how “legein” moved from its original meaning, “to gather, to pick,” to eventually mean “to read, to speak,” but a couple of explanations that made sense to me are: (a) in the early, primitive act of reading, when it was just being invented, people had to painstakingly discern symbols and gather them together to form words; (b) the stories people told each other before the invention of writing were usually collections of their experiences, so in telling stories you transmitted a collectively “gathered” body of knowledge.



Bibliography

The two sources used for this post are the corresponding Word Origin & History entries on dictionary.com (which is what initially piqued my interest) and the explanation of the root in the American Heritage Dictionary of Indo-European roots edited by Calvert Watkins.

http://dictionary.reference.com/browse/anthology
1630s, from L. anthologia, from Gk. anthologia “flower-gathering,” from anthos “a flower” (see anther) + logia “collection, collecting,” from legein “gather” (see lecture). Modern sense (which emerged in Late Gk.) is metaphoric: “flowers” of verse, small poems by various writers gathered together.

http://dictionary.reference.com/browse/legion
c.1200, from O.Fr. legion “Roman legion” (3,000 to 6,000 men, under Marius usually with attached cavalry), from L. legionem (nom. legio) “body of soldiers,” from legere “to choose, gather,” also “to read” (see lecture).

http://dictionary.reference.com/browse/lecture
late 14c., “action of reading, that which is read,” from M.L. lectura “a reading, lecture,” from L. lectus, pp. of legere “to read,” originally “to gather, collect, pick out, choose” (cf. election), from PIE *leg- “to pick together, gather, collect” (cf. Gk. legein “to say, tell, speak, declare,” originally, in Homer, “to pick out, select, collect, enumerate”).

The full root explanation is available on Google books and you can find it here:
http://books.google.com/books?id=4IHbQgz1nZYC&pg=PA47#v=onepage
On page 47, see entry for leg-



Andrei Vazhnov
therootofborscht@gmail.com

Written by Andrei Vazhnov

January 31, 2011 at 10:06 pm

World As Text


Imagine that due to a congenital condition you were born blind and lived the first twenty years of your life without ever knowing what seeing feels like. It would be wrong to say that you were living in “darkness” — the term darkness simply means the absence of light, and to you everything that has to do with light is an unfamiliar sensation, something you have never experienced. One day, thanks to advances in medicine and science, a breakthrough operation restores your sight. You open your eyes and for the first time see the sun, the faces of friends and family, the doors, the chairs, the windows, the flowers…

Except that none of this would happen. In fact, you would experience a strange, overwhelming, disorienting sensation. It would be much as if you suddenly started “seeing” the Earth’s magnetic field the way some birds do, or feeling the electric capacitance of nearby objects (an ability common to sharks and other species). The new sensation is so disconcerting that people whose sight is restored in adulthood often have to wear blindfolds for a very long time and, in some cases, even have the operation reversed.

Restoring sight to people whose blindness is inborn is a very different challenge than restoring sight to those who lost it due to accident or illness. In one case, extensively documented by MIT scientists, a woman regained her sight after a successful surgery to remove the dense cataracts that had obstructed her vision since birth. It took her six months to learn to recognize the faces of her siblings and more than a year to recognize everyday household objects. In fact, most ophthalmologists believe that, past a certain age, treatments of congenital blindness are unlikely to succeed even if physical sight is restored.

The reason for this is that the brain of someone who has never experienced sight is simply not wired to interpret the incoming signals, much as your brain is not wired to sense the magnetic field or electric capacitance. Though the eyes see the incoming light, the mind does not know how to make sense of it. To quote philosopher Henri Bergson, “the eye can only see what the mind is prepared to comprehend.”


What does the following quote say?

thera inins 
pains taysma 
inly int
hepla in


It might take a couple of moments to figure it out, but if you mentally remove all the spaces, it will soon become clear. Note that all the information was there to begin with, and your eyes saw the same exact black-on-white squiggles even before you understood what the quote says. The physical world stayed the same; the only thing that changed is your mind’s point of view, the way it groups the squiggles.
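
The regrouping your mind just performed can even be written down as a tiny algorithm. Here is a sketch that strips the spaces and greedily re-parses the stream against a known vocabulary; the point is that the word list has to be supplied in advance.

```python
# The squiggles never change; only the grouping does. Given a vocabulary,
# a greedy longest-match pass re-discovers the words in the spaceless stream.

def parse(stream, vocabulary):
    words, pos = [], 0
    while pos < len(stream):
        for w in sorted(vocabulary, key=len, reverse=True):  # try longest words first
            if stream.startswith(w, pos):
                words.append(w)
                pos += len(w)
                break
        else:
            return None  # no reading found; the stream stays unreadable
    return " ".join(words)

garbled = "thera inins pains taysma inly int hepla in"
stream = garbled.replace(" ", "")  # remove the misleading spaces
vocab = {"the", "rain", "in", "spain", "stays", "mainly", "plain"}
print(parse(stream, vocab))  # -> "the rain in spain stays mainly in the plain"
```

Without the vocabulary, the same stream of squiggles stays unreadable; with it, the parse falls out almost for free.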

Similarly, a congenitally blind person whose sight was restored, let’s call her Jane, sees the same faces, chairs, and windows as the doctors who are in the room, but her mind does not know how to parse what it sees. Just as you had to figure out where the word “rain” ends and the word “spain” begins, Jane has to slowly and painstakingly learn where the object “table” ends and where the objects “wall” and “vase” begin. There are many moving lines, spots, and stripes, but Jane’s mind cannot tell whether a particular dark spot is a power outlet on the wall, the hair of a doctor walking down the corridor, or a fly zooming close by.

Is it possible that you, the sighted person reading this, are in the same boat? That, in fact, you are seeing lots of strange moving squiggles, and your mind tells you a story that you take as the apparent reality? Extend your arm as far as you can and look at your thumb. While staying focused on your thumbnail, without looking away, slowly become aware of what your eye sees on the left and on the right. You will find that the focus of your vision does not extend more than a foot on either side of your thumb; everything else is very blurry. Then there is a giant invisible blind spot near the center of your visual field (try finding it!). And, last but not least, consider the fact that everyday objects look entirely different depending on the angle you look at them from. For instance, the 2-dimensional images that a chair projects onto your retina will be completely different when you look at it from above versus from the side; geometrically, the two figures will have very little to do with each other. (While we are at it, your eye also sees everything upside down, with the brain providing a helpful image-flipping service after the fact.)

So, where does this bring us? As you walk around, what you actually see is a kaleidoscope of rapidly morphing projections of geometrical shapes, all seen within a narrow stripe of visual field about a foot wide, amidst barely intelligible blur, upside down, with a big blind spot near the center. And yet, unless you stop and perform these experiments, everything appears in perfectly sharp focus, with no gaping holes, and a chair is recognizable whether it looks like a circle when you are about to sit down or two parallel lines when you look at it sideways from far away. It seems that your mind is able to parse all this chaotic input stream and paint a nice smooth picture that we take to be our reality. But how does it know what to look for in the first place? That there are chairs, and tables, and windows to be seen?


When we learn a foreign language in school, we open our dictionaries and learn the names of things in the other language. It seems obvious to us that chairs, tables, and windows already exist; we just need to know what those other guys call them. We can never know what it was like to learn our native language, so we assume that it was sort of just like that — that we learned names for things that were already out there. But herein is the self-referential paradox: before you knew what a door was, how would you even know that it was a separate object from the wall that enclosed it? Or even what a “wall” is, for that matter? The door came into existence for you when you first learned its name; in learning the word “door” for the first time, you learned to parse the world in a particular way, a way that was useful for a biological creature of your size and needs.

The Earth moves around the Sun at 67,000 mph, but we don’t notice it because we’re on it. Similarly, we don’t notice that we have parsed the world in one particular way — just one among many — because we live embedded in a linguistic environment of creatures who are biologically similar. To us, water is a distinct entity endowed with many properties; to a fish, water does not exist because it is embedded in it. And what is water anyway? What do a rainstorm, a puddle, an ocean, and a shower have in common? We only began to recognize them as a single something when we learned that the word water referred to all these seemingly unrelated experiences.

Nothing illustrates this function of language more powerfully than the story of Helen Keller. Left deaf and blind by a childhood illness, and isolated from language for many years, she vividly remembered what it was like to use words for the first time, when her teacher, Anne Sullivan, taught her the meaning of the word water.

We walked down the path to the well-house, attracted by the fragrance of the honeysuckle with which it was covered. Some one was drawing water and my teacher placed my hand under the spout. As the cool stream gushed over one hand she spelled into the other the word water, first slowly, then rapidly.

I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten — a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that “w-a-t-e-r” meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, hope, joy, set it free! There were barriers still, it is true, but barriers that could in time be swept away. I left the well-house eager to learn. Everything had a name, and each name gave birth to a new thought.

As we returned to the house every object which I touched seemed to quiver with life. That was because I saw everything with the strange, new sight that had come to me. (wikiquote)

Our ability to see water, chairs, and tables depends on our mind having a prior idea of what these things are. And these ideas come from language — from the distant time most of us don’t remember when, like Helen Keller, we first realized that the “wonderful cool something” was water. Your mind, among other things, is a finely tuned device for parsing the world into objects and entities in a way that is useful for your daily life.


Picture the letter “R” as it appears in a dozen different fonts. If you imagine that you have never seen these symbols before, you would probably agree that, visually speaking, they do not look very much alike. Yet, when you read a book, you never even realize that the letters look different from one book to another. Do you remember what the letter “R” looked like in the last work of fiction you read? Was it sans-serif or not? Unless you are in the business of creating fonts, your mind never registers these differences, even though they can be large indeed; the mind just sees the concept of “R” directly.

What makes a chair a chair? Is it something that has a seat, a back, and four legs? No, a chair could just as easily be something with three legs, with one big leg, with a back or without a back, or even just a suitably large cylinder with a soft upper surface. Just as your mind reads the concept of “R” and skips the irrelevant details, your mind also holds the idea that a chair is a device for sitting, and in your daily life you don’t even notice what a particular chair is like — you just sit down. More importantly, if you take the cylindrical chair you just bought and, instead of placing it in your living room, turn it upside down and make it part of an abstract sculpture with lots of other odd-looking contraptions, passers-by won’t even realize it is a chair. Take it out of context, and it stops being a chair.

This is exactly analogous to taking a word out of context. If you just say the word gathering, it could refer to a meeting, to the gathering of demographic data by census workers, to hunting and gathering, and to many other things. In other words, it is meaningless, just like our cylindrical chair lying on its side in a landfill. Only the context of a sentence, “They are gathering rosebuds,” can give it meaning.

This analogy is not coincidental: it was through language that we first created our world, by choosing which parts of it to distinguish as separate objects, entities, concepts. We have given them names and brought them into existence by choosing to parse the world this way for the rest of our lives. The central role of language does not diminish as you grow older — it just recedes into the background. As we walk about our day, if we see chairs and tables, it is only because they are embedded into sentences of rooms and houses and offices. The world is a text that we cannot stop reading.

When we look at a map of the world, we see continents and rivers and oceans. Is what we call the “Pacific Ocean” real? Real in the sense that it exists independently of humans? Well, it is real in that the body of water it refers to exists independently of the human observer. But the fact that we chose to take a specific part of Earth’s water and call it the “Pacific Ocean” is an entirely arbitrary way to distinguish one particular aspect of the world and give it a name. On further reflection, the Pacific Ocean is not even a “part” of Earth’s water — it is not separate from the other “oceans” in any meaningful way. If anything, it is just the complement of the contours of the continents that form its boundaries. For a whale or an octopus, for whom the Earth’s water is just one big watery “continent,” there are, no doubt, other ways to parse their world into parts that are meaningful to them, but, whatever they are, they have nothing to do with our oceans.


Our world, then, is neither real nor imaginary. It is rather like the famous parable of the Blind Men and the Elephant, with one crucial and subtle difference: in the parable there is a privileged perspective — that of the sighted person. As a sighted person, you presumably think you know what the elephant is really like — that your knowledge is superior to that of the blind men, who can only examine it by touch. But there are equally important parts of the elephant that you do not see, such as its digestive tract, its limbic system, its group behavior. A biology professor specializing in elephants can claim, with the same justification, that your knowledge of the elephant is as limited as that of the proverbial blind men — that you only perceive the superficial details immediately accessible to your senses. In fact, a blind person, with a much more refined sense of touch, hearing, and smell, may well perceive important non-obvious characteristics of the elephant (e.g. the rhythm of its breathing) that the sighted person would overlook.

When it comes to the universe as a whole, there is no privileged perspective. Each new creature that walks or swims this planet brings its own unique way of parsing the world into meaningful objects and entities — the elephant is getting painted in more and more detail in an unfolding process that has no end. In a very real sense, different beings create different worlds, irreducible to one another.

It may seem as though this is all empty scholasticism: that in reality there are things out there that are solid like concrete or soft like water, transparent like glass or opaque like cardboard. But to a sonar-wielding bat, glass is no more transparent than cardboard, and to a hypothetical alien consisting of an electron cloud, the Earth’s invisible magnetic field may feel as solid as concrete. There is no meaningful way in which our perspective is privileged — if it seems more real, it is only because we are the ones who created it.


Bibliography

Bergson, Henri “Creative Evolution”
Bortoft, Henri “The Wholeness of Nature”
Edelman, Gerald “Wider than the Sky: The Phenomenal Gift of Consciousness”
Ostrovsky, Yuri; Andalman, Aaron; Sinha, Pawan “Vision following extended congenital blindness”
Spencer-Brown, George “Laws of Form”



Andrei Vazhnov
therootofborscht@gmail.com

Written by Andrei Vazhnov

April 5, 2010 at 2:48 pm