it’s good to know my computer is actually doing just that, at near 100% efficiency, instead of sitting mostly idle. plus, watching the protein folds and wishing i could nudge the search algorithm closer to the lowest-energy state is quite mesmerizing. you also gotta admit they have a cool advertising tagline.
ben goertzel has some thoughts on how academic papers are stuffed with irrelevant filler, and how this impedes real progress:
what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important — blah, blah, blah, blah, blah…. A point that I would more naturally get across in five pages of clear and simple text winds up being a thirty page paper!
I’m writing some books describing the Novamente AI system — one of them, 600 pages of text, was just submitted to a publisher. The other two, about 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time but they are only semi-technical books, and they don’t follow all the rules of academic writing — for instance, the whole 600 page book has a reference list no longer than I’ve seen on many 50-page academic papers, which is because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write about 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least a full year of work, probably two full years of work, to write publishable academic papers on the material in these books!
the lack of risk-taking is particularly evident in computer science:
Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy “toy problems” that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard “toy problem.” As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple “toy” test problems that are popular with journal referees….
his first scenario makes me wonder whether amateur scientists could once again make meaningful contributions to research, perhaps combined with a wiki-like process that (hopefully) would identify promising directions better than today’s peer review does:
And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don’t bother to ram it through the publication mill … or, generate new knowledge at the rate that’s acceptable in academia, and spend half your time wording things politically and looking up references and doing comparative analyses rather than doing truly productive creative research.
stef just finished her master’s thesis on tendax, a collaborative editor with an innovative database backend that supports multiuser editing and other advanced features. on a less happy note, tendax also marks the first software patent of my alma mater. stef was featured in a computerworld article about tendax. by the way, what’s up with the silly image on the tendax homepage?
stefano on the dynamics of academic peer review.
The day that you find a more interesting paper in Citeseer than in any IEEE or ACM e-journal, it’s the day you don’t look back.
But what would happen next? What about the academic symbiosis with the peer review system?
Citeseer ranks papers with a Google PageRank-like algorithm: it analyzes the topology of the citation network to determine which papers are more influential than others. Just as Google does with hyperlinks between web pages, Citeseer does with bibliographic citations: the result is that peer review is done not by a panel of experts, but by every researcher in the field!!
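the idea is easy to sketch. below is a minimal, toy PageRank over a made-up citation graph (the graph, damping factor, and iteration count are illustrative assumptions, not Citeseer’s actual implementation): each paper’s influence is repeatedly redistributed along its outgoing citations until the ranking settles.

```python
# Toy PageRank over a tiny citation graph: papers "vote" for the
# papers they cite, so influence flows along citation edges.
def pagerank(citations, damping=0.85, iterations=50):
    papers = list(citations)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}          # start uniform
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for paper, cited in citations.items():
            if cited:
                share = damping * rank[paper] / len(cited)
                for c in cited:
                    new_rank[c] += share
            else:
                # dangling paper (cites nothing): spread its rank evenly
                for p in papers:
                    new_rank[p] += damping * rank[paper] / n
        rank = new_rank
    return rank

# hypothetical graph: A cites B and C; B cites C; C cites nothing
graph = {"A": ["B", "C"], "B": ["C"], "C": []}
ranks = pagerank(graph)
# C is cited by both A and B, so it comes out most influential
```

note how nobody hand-picked C as the winner: the whole network of citations did, which is exactly the “every researcher is a referee” point above.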
unfortunately, neither the university of zurich nor st. gallen is yet blog-enabled, and thus word didn’t spread about a very interesting seminar that took place today. michi attended, and we think the collaboration between the theorists and the open source practitioners will continue.
next time, i will try to be there, and blog the session.
Amenable to extensive parallelization, Google’s web search application lets different queries run on different processors and, by partitioning the overall index, also lets a single query use multiple processors. To handle this workload, Google’s architecture features clusters of more than 15,000 commodity-class PCs with fault-tolerant software. This architecture achieves superior performance at a fraction of the cost of a system built from fewer, but more expensive, high-end servers.
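the index-partitioning trick can be sketched in a few lines. this is only a toy model (the shard count, the corpus, and the round-robin assignment are my own assumptions, not Google’s design): documents are split across shards, each shard builds its own small inverted index, and a single query fans out to all shards and merges the partial hits.

```python
# Toy document-partitioned search: split the corpus across shards,
# fan a query out to every shard, merge the partial results.
def build_shards(docs, num_shards=3):
    shards = [dict() for _ in range(num_shards)]
    for i, (doc_id, text) in enumerate(docs.items()):
        shard = shards[i % num_shards]       # round-robin assignment
        for word in text.lower().split():
            # per-shard inverted index: word -> set of doc ids
            shard.setdefault(word, set()).add(doc_id)
    return shards

def search(shards, word):
    # in a real cluster each shard answers in parallel on its own
    # machine; here we just loop and union the partial hit sets
    hits = set()
    for shard in shards:
        hits |= shard.get(word.lower(), set())
    return sorted(hits)

docs = {
    "d1": "google cluster architecture",
    "d2": "commodity pcs cluster",
    "d3": "fault tolerant software",
}
shards = build_shards(docs)
# "cluster" appears in d1 and d2, found across two different shards
```

because each shard only holds a fraction of the index, adding more cheap machines shrinks per-shard work, which is why a pile of commodity PCs can beat a few expensive servers on this workload.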