the analog tax

spending time with my parents affords me an opportunity to see what they use their computer for, and some of it is not pretty: take the whole world of “mail merge”: whether in microsoft office or its clunky cousin openoffice, it is a world of pain. the user interface is unspeakably bad, quite in tune with a process that is about as fun as a visit to the dentist in the first place. bringing this bloated world onto the web with the recent craze of ajax word processors is fundamentally misguided: why deal with label printing when there is email? similarly, when you are faced with the task of protecting $2.5B in revenues per quarter, why screw around with new toolbars when your products don’t do squat to solve the real problem: outdated assumptions about a paper-based world.

preparing for the post-sourceforge world

People have asked the question “What if SourceForge disappeared?” for years now, but I have to wonder if we should be asking this question again. Now, SourceForge has its warts, but it’s ultimately a beneficial service. And, even if they did disappear, it’s highly unlikely that the open source movement would be handicapped for any real length of time.

But here’s why I ask the question:

phil goes into some more detail, wondering whether GOOG or YHOO might be prepared to take over. Maybe the woes of sourceforge can bring some long-needed fresh air though:

I think that the problem with SourceForge is that they are providing 1999-era functionality based on a business model that really is not much more than an afterthought after the collapse of their hardware business. Consequently, the core functionality in the SourceForge project hasn’t changed all that much in the past six years. All the projects on SourceForge are effectively partitioned… we don’t see any tools for figuring out code reuse possibilities or anything particularly innovative.

3 years ago, i researched the state of the art of open source production, and developed a matrix to map activities, actors and tools. To say that there are many areas for improvement in the way open source software is produced is an understatement. The obvious observation that there are power laws in effect with regard to the quality and popularity of projects makes me wonder what can be done to improve life for the countless small projects out there that have neither their code in order nor any audience. A recent study found that 81% of sourceforge projects are inactive, and only about 0.05% innovative.
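to make the power-law point concrete, here is a toy simulation; the project count and tail exponent below are made-up assumptions, not figures from the study:

```python
# toy simulation only: the project count and tail exponent are invented, not
# taken from the study cited above. it just shows how a power-law popularity
# distribution concentrates attention on a handful of projects.
import random

random.seed(1)
N_PROJECTS = 100_000
ALPHA = 1.1  # smaller alpha = heavier tail

# synthetic "downloads per project", drawn from a Pareto distribution
downloads = sorted((random.paretovariate(ALPHA) for _ in range(N_PROJECTS)),
                   reverse=True)

total = sum(downloads)
top_share = sum(downloads[:N_PROJECTS // 100]) / total
print(f"top 1% of projects account for {top_share:.0%} of all downloads")
```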

A considerable subset of these projects deserves to do better on both fronts with the right tooling. Done correctly, a post-sourceforge integrated site could act as a large-scale lab for novel collaboration and software engineering techniques. Tool vendors might be willing to integrate their technology in return for widespread usage and name recognition, and the rest of us might finally break free of the anachronisms of mailing lists and other 1980s-era solutions.

Furthermore, the site could be made to emit statistical data for open source research. Most academic papers in the field already look at sourceforge anyway; if they had a way to get better data, they might actually arrive at some useful conclusions about reuse patterns, social networks, and much more. Such a site could therefore be a down payment on discovering the finer points of peer production, which is slated to become ever more important in the larger economy.
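as an illustration of what “better data” could look like, here is a hypothetical sketch (invented names, assumed export format) that turns a contributor-to-project mapping into a project co-contribution graph, the raw material for simple social network analysis:

```python
# hypothetical sketch of the kind of raw data such a site could emit and how a
# researcher might turn it into a simple social-network view. all project and
# contributor names are invented; this is not an existing sourceforge export.
from collections import defaultdict
from itertools import combinations

# contributor -> set of projects they commit to (assumed export format)
contributions = {
    "alice": {"libfoo", "barapp"},
    "bob":   {"libfoo", "quuxlib"},
    "carol": {"barapp", "quuxlib", "libfoo"},
}

# edge weight = number of contributors two projects share
edge_weight = defaultdict(int)
for projects in contributions.values():
    for a, b in combinations(sorted(projects), 2):
        edge_weight[(a, b)] += 1

for (a, b), w in sorted(edge_weight.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {w} shared contributor(s)")
```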

back to amateur science?

ben goertzel has some thoughts on how academic papers are stuffed with irrelevant filler, and how this impedes real progress:

what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important — blah, blah, blah, blah, blah…. A point that I would more naturally get across in five pages of clear and simple text winds up being a thirty page paper!

I’m writing some books describing the Novamente AI system — one of them, 600 pages of text, was just submitted to a publisher. The other two, about 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time but they are only semi-technical books, and they don’t follow all the rules of academic writing — for instance, the whole 600 page book has a reference list no longer than I’ve seen on many 50-page academic papers, which is because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write about 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least a full year of work, probably two full years of work, to write publishable academic papers on the material in these books!

the lack of risk-taking is particularly evident in computer science:

Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy “toy problems” that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard “toy problem.” As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple “toy” test problems that are popular with journal referees….

his first scenario makes me wonder whether amateur scientists could again make meaningful contributions to research, combined with a wiki-like process that (hopefully) would identify promising directions better than today’s peer review:

And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don’t bother to ram it through the publication mill … or, generate new knowledge at the rate that’s acceptable in academia, and spend half your time wording things politically and looking up references and doing comparative analyses rather than doing truly productive creative research.

backward or forward?

joel spolsky wrote an excellent article on how microsoft shifted from making sure all existing applications continue to run on new os releases to expecting developers to rewrite everything.

Raymond Chen writes, “I get particularly furious when people accuse Microsoft of maliciously breaking applications during OS upgrades. If any application failed to run on Windows 95, I took it as a personal failure. I spent many sleepless nights fixing bugs in third-party programs just so they could keep running on Windows 95.”

Inside Microsoft, the MSDN Magazine Camp has won the battle. The first big win was making Visual Basic.NET not backwards-compatible with VB 6.0. This was literally the first time in living memory that when you bought an upgrade to a Microsoft product, your old data (i.e. the code you had written in VB6) could not be imported perfectly and silently. It was the first time a Microsoft upgrade did not respect the work that users did using the previous version of a product.

he goes on to suggest that microsoft is trying to save the rich client by any means necessary:

There’s no way Microsoft is going to allow DHTML to get any better than it already is: it’s just too dangerous to their core business, the rich client. The big meme at Microsoft these days is: “Microsoft is betting the company on the rich client.” You’ll see that somewhere in every slide presentation about Longhorn.

fusing math and art

Professor wins

A 22-year-old MIT professor whose work fuses art, science, work and play is the recipient of a $500,000 MacArthur Fellowship, commonly known as the genius grant.
Assistant Professor Erik Demaine of electrical engineering and computer science – who last month was called one of the most brilliant scientists in America by Popular Science magazine – is one of the youngest people ever selected for the fellowship and the youngest of the 24 named this year.

interesting combination of approaches. there is so much to explore here…

IEEE hardon

The Magnifying Transmitter produced thunder which was heard from his lab as far away as Cripple Creek. He became the first man to create electrical effects on the scale of lightning, producing bolts forty-two meters in length. People near the lab would observe sparks jumping from the ground to their feet and through their shoes. Some people observed electrical sparks from the fire hydrants. The area around the laboratory would glow with a plasmic blue corona. One of Tesla’s experiments with the Magnifying Transmitter destroyed the Colorado Springs Electric Company’s generator by back-feeding the city’s power generators, and blacked out the city.

very nice account of the life of “the inventor of the 20th century”, nikola tesla.

slower and sharper

two of my favorite diversions recently got a boost through new technologies: snowboarding and photography.

getting a grip
For skis, a network of electrodes embedded in each ski base will apply an electric field to the ski-ice or ski-snow interface. This low-frequency electric field will cause ice and snow to stick to the ski base, increasing friction and limiting the speed of the skier. If any skiers out there care to go faster, just increase the frequency. A high-frequency electric field applied at the ski base has an opposite effect as it melts snow and ice just enough to create the same thin, lubricating layer of water, but without the refreezing/sticking phenomenon.

depth of field

This is an image of inclined crayons from a traditional F/8 imaging system. The depth of field is less than one crayon width. The foreground and background are badly blurred due to misfocus.

After simple color and object independent image processing the final Wavefront Coded image is formed. This image is sharp and clear over the entire image. Compare to the stopped-down image from the traditional system. Wavefront Coding allows a wide aperture system to deliver both light gathering power and a very large depth of field.

A Wavefront Coded system differs from a classical digital imaging system in two fundamental ways. First, the light traveling through a Wavefront Coded lens system does not focus on a specific focal plane. Because of a special surface that is placed in the lens system at the aperture stop, no points of the object are imaged as points on the focal plane. Rather, these points are uniformly blurred over an extended range about the focal plane. This situation is referred to as “encoding” the light passing through the lens system. Another way to describe this effect is to say that the special Wavefront Coded surface in the lens system changes the ray paths such that each ray (except the axial ray) is deviated differently from the path that it would take in a classical, unaltered lens system and therefore they do not converge at the focal plane.

The second difference found in a Wavefront Coded system is that the image detected at the detector is not sharp and clear, as discussed above, and thus must be “decoded” by a subsequent digital filtering operation. The image from the digital detector is filtered to produce an image that is sharp and clear, but has non-classical properties such as a depth of field (or depth of focus) that is much greater than that produced by an unaltered classical lens system of the same f-number.
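the encode/decode pipeline can be sketched in a few lines; this is a rough stand-in, not the vendor’s actual algorithm, and the gaussian kernel below merely substitutes for the engineered phase surface described above:

```python
# rough sketch of the encode/decode idea, not the vendor's actual algorithm:
# blur with a fixed, depth-insensitive PSF ("encode"), then undo the known blur
# with a Wiener-style inverse filter ("decode"). the gaussian kernel is only a
# stand-in for the engineered phase surface.
import numpy as np

def encode(image, psf):
    """Blur the image with the fixed point spread function (frequency domain)."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, image.shape)))

def decode(blurred, psf, noise_power=1e-3):
    """Wiener-style deconvolution: invert the known, fixed blur."""
    H = np.fft.fft2(psf, blurred.shape)
    wiener = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))

rng = np.random.default_rng(0)
scene = rng.random((64, 64))         # toy "in-focus" scene

y, x = np.mgrid[-3:4, -3:4]          # small blur kernel
psf = np.exp(-(x**2 + y**2) / 4.0)
psf /= psf.sum()

restored = decode(encode(scene, psf), psf)
print("mean restoration error:", float(np.abs(restored - scene).mean()))
```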

why apple is losing the technology edge

mark gonzales (former apple software chief) makes a good point about why apple has not really been that innovative lately.

But I think the market share discussion is missing the primary calculation that must go on in the exec staff of Apple (and I know it did through the 80s and 90s.) It takes development dollars to keep an OS on the cutting edge, and marketing dollars to explain the differences and advantages. These costs must be recovered, in the end, by sales (thus from customers.)

Market share comes into play when you divide these costs by units. To put some numbers on it, if Apple has 5% share, and Microsoft 90% share, Microsoft can spend 17 times more on R&D than Apple can, and maintain price parity.
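the arithmetic is easy to make explicit; only the share ratio comes from the quote, while the budget and unit figures below are placeholders:

```python
# back-of-the-envelope version of the per-unit argument above. market shares
# come from the quote; the R&D budget and apple's unit volume are placeholders.
apple_share, microsoft_share = 0.05, 0.90
units_ratio = microsoft_share / apple_share   # 18x the volume ("17 times more")

rd_budget = 1_000_000_000                     # hypothetical yearly OS R&D spend
apple_units = 5_000_000                       # hypothetical unit sales at 5% share
microsoft_units = apple_units * units_ratio

print("R&D cost per unit at equal budgets:")
print(f"  apple:     ${rd_budget / apple_units:,.2f}")
print(f"  microsoft: ${rd_budget / microsoft_units:,.2f}")
print(f"budget ratio at per-unit parity: {units_ratio:.0f}x")
```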