.................with apologies to Alistair Cooke

Wednesday, 30 July 2008

A lovely, thought-provoking and whimsical piece...

Taken in full from the article here at Edge by George Dyson that was turned down by Wired! It reminds me of some of the writings of Neal Stephenson...


ENGINEERS' DREAMS

[Note: although the following story is fiction, all quotations
have been reproduced exactly from historical documents that exist.]

Ed was old enough to remember his first transistor radio—a Zenith Royal 500—back when seven transistors attracted attention at the beach. Soon the Japanese showed up, doing better (and smaller) with six.

By the time Ed turned 65, fifteen billion transistors per second were being produced. Now 68, he had been lured out of retirement when the bidding wars for young engineers (and between them for houses) prompted Google to begin looking for old-timers who already had seven-figure mid-peninsula roofs over their heads and did not require stock options to show up for work. Bits are bits, gates are gates, and logic is logic. A systems engineer from the 1960s was right at home in the bowels of a server farm in Mountain View.

In 1958, fresh out of the Navy, Ed had been assigned to the System Development Corporation in Santa Monica to work on SAGE, the hemispheric air defense network that was completed just as the switch from bombers to missiles made it obsolete. Some two dozen air defense sector command centers, each based around an AN-FSQ-7 (Army Navy Fixed Special eQuipment) computer built by IBM, were housed in windowless buildings armored by six feet of blast-resistant concrete. Fifty-eight thousand vacuum tubes, 170,000 diodes, 3,000 miles of wiring, 500 tons of air-conditioning equipment and a 3000-kilowatt power supply were divided between two identical processors, one running the active system and the other standing by as a “warm” backup, running diagnostic routines. One hundred Air Force officers and personnel were stationed at each command center, trained to follow a pre-rehearsed game plan in the event of enemy attack. Artificial intelligence? The sooner the better, Ed hoped. Only the collective intelligence of computers could save us from the weapons they had helped us to invent.

In 1960, Ed attended a series of meetings with Julian Bigelow, the legendary engineer who had collaborated with Norbert Wiener on anti-aircraft fire control during World War II and with John von Neumann afterwards—developing the first 32 x 32 x 40-bit matrix of random-access memory and the logical architecture that has descended to all computers since. Random-access memory gave machines access to numbers—and gave numbers access to machines.

Bigelow was visiting at RAND and UCLA, where von Neumann (preceded by engineers Gerald Estrin, Jack Rosenberg, and Willis Ware) had been planning to build a new laboratory before cancer brought his trajectory to a halt. Copies of the machine they had built together in Princeton had proliferated as explosively as the Monte Carlo simulations of chain-reacting neutrons hosted by the original 5-kilobyte prototype in 1951. Bigelow, who never expected the design compromises he made in 1946 to survive for sixty years, questioned the central dogma of digital computing: that without programmers, computers cannot compute. He viewed processors as organisms that digest code and produce results, consuming instructions so fast that iterative, recursive processes are the only way that humans are able to generate instructions fast enough to keep up. "Highly recursive, conditional and repetitive routines are used because they are notationally efficient (but not necessarily unique) as descriptions of underlying processes," he explained. Strictly sequential processing and strictly numerical addressing impose severe restrictions on the abilities of computers, and Bigelow speculated from the very beginning about "the possibility of causing various elementary pieces of information situated in the cells of a large array (say, of memory) to enter into a computation process without explicitly generating a coordinate address in 'machine-space' for selecting them out of the array."

At Google, Bigelow's vision was being brought to life. The von Neumann universe was becoming a non-von Neumann universe. Turing machines were being assembled into something that was not a Turing machine. In biology, the instructions say "Do this with that" (without specifying where or when the next available copy of a particular molecule is expected to be found) or "Connect this to that" (without specifying a numerical address). Technology was finally catching up. Here, at last, was the long-awaited revolt against the intolerance of the numerical address matrix and central clock cycle for error and ambiguity in specifying where and when.

The advent of template-based addressing would unleash entirely new forms of digital organisms, beginning with simple and semi-autonomous coded structures, on the level of nucleotides bringing amino acids (or template-based AdWords revenue) back to a collective nest. The search for answers to questions of interest to human beings was only one step along the way.

Google was inverting the von Neumann matrix—by coaxing the matrix into inverting itself. Von Neumann's "Numerical Inverting of Matrices of High Order," published (with Herman Goldstine) in 1947, confirmed his ambition to build a machine that could invert matrices of non-trivial size. A 1950 postscript, "Matrix Inversion by a Monte Carlo Method," describes how a statistical, random-walk procedure credited to von Neumann and Stan Ulam "can be used to invert a class of n-th order matrices with only n² arithmetic operations in addition to the scanning and discriminating required to play the solitaire game." The aggregate of all our searches for unpredictable (but meaningful) strings of bits is, in effect, a Monte Carlo process for inverting the matrix that constitutes the World Wide Web.
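
An aside from me, not from Dyson's piece: the von Neumann–Ulam scheme is easy to sketch. To invert I − A you sum the series I + A + A² + ..., and a random walk can sample that series term by term, with importance weights keeping the estimate unbiased. Here is a toy Python version under my own assumptions (a small matrix whose norm guarantees the series converges); it illustrates the idea, not the 1950 paper's algorithm verbatim.

import numpy as np

# Von Neumann-Ulam random-walk inversion (toy sketch).
# (I - A)^-1 = I + A + A^2 + ...; each walk samples the series.
def mc_inverse_row(A, i, walks=20000, stop=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = np.zeros(n)
    for _ in range(walks):
        state, w = i, 1.0
        while True:
            est[state] += w          # tally the current series term
            if rng.random() < stop:  # absorb: this walk ends
                break
            nxt = rng.integers(n)    # hop to a uniformly random cell
            # weight compensates for transition and survival odds
            w *= A[state, nxt] / ((1.0 - stop) / n)
            state = nxt
    return est / walks

rng = np.random.default_rng(1)
A = 0.3 * rng.random((4, 4)) / 4        # small norm, so the series converges
print(mc_inverse_row(A, 0))             # Monte Carlo estimate of row 0
print(np.linalg.inv(np.eye(4) - A)[0])  # exact row, for comparison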

Ed developed a rapport with the machines that escaped those who had never felt the warmth of a vacuum tube or the texture of a core memory plane. Within three months he was not only troubleshooting the misbehavior of individual data centers, but examining how the archipelago of data centers cooperated—and competed—on a global scale.

In the digital universe that greeted the launch of Google, 99 percent of processing cycles were going to waste. The global computer, for all its powers, was perhaps the least efficient machine that humans had ever built. There was a thin veneer of instructions, and then there was this dark, empty, 99 percent.

What brought Ed to the attention of Google was that he had been in on something referred to as "Mach 9." In the late 1990s, a web of optical fiber had engulfed the world. At peak deployment, in 2000, fiber was being rolled out, globally, at 7,000 miles per hour, or nine times the speed of sound. Mach 9. All the people in the world, talking at once, could never light up all this fiber. But those 15 billion transistors being added every second could. Google had been buying up dark fiber at pennies on the dollar and bringing in those, like Ed, who understood the high-speed optical switching required to connect dark processors to dark fiber. Metazoan codes would do the rest.

As he surveyed the Google Archipelago, Ed was reminded of some handwritten notes that Julian Bigelow had shown him, summarizing a conversation between Stan Ulam and John von Neumann on a bench in Central Park in early November 1952. Ulam and von Neumann had met in secret to discuss the 10-megaton Mike shot, whose detonation at Eniwetok on November 1 would be kept embargoed from the public until 1953. Mike ushered in not only the age of thermonuclear weapons but the age of digital computers, confirming the inaugural calculation that had run on the Princeton machine for a full six weeks. The conversation soon turned from the end of one world to the dawning of the next.

“Given is an actually infinite system of points (the actual infinity is worth stressing because nothing will make sense on a finite no matter how large model),” noted Ulam, who then sketched how he and von Neumann had hypothesized the evolution of Turing-complete universal cellular automata within a digital universe of communicating memory cells. For von Neumann to remain interested, the definitions had to be mathematically precise: “A ‘universal’ automaton is a finite system which given an arbitrary logical proposition in form of (a linear set L) tape attached to it, at say specified points, will produce the true or false answer. (Universal ought to have relative sense: with reference to a class of problems it can decide). The ‘arbitrary’ means really in a class of propositions like Turing's—or smaller or bigger.”

“An organism (any reason to be afraid of this term yet?) is a universal automaton which produces other automata like it in space which is inert or only ‘randomly activated’ around it,” Ulam’s notes continued. “This ‘universality’ is probably necessary to organize or resist organization by other automata?” he asked, parenthetically, before outlining a mathematical formulation of the evolution of such organisms into metazoan forms. In the end he acknowledged that a stochastic, rather than deterministic, model might have to be invoked, which, “unfortunately, would have to involve an enormous amount of probabilistic superstructure to the outlined theory. I think it should probably be omitted unless it involves the crux of the generation and evolution problem—which it might?”

The universal machines now proliferating fastest in the digital universe are virtual machines—not simply Turing machines, but Turing-Ulam machines. They exist as precisely defined entities in the von Neumann universe, but have no fixed existence in ours. Sure, thought Ed, they are merely doing the low-level digital housekeeping that does not require dedicated physical machines. But Ed knew this was the beginning of something big. Google (both directly and indirectly) was breeding huge numbers of Turing-Ulam machines. They were proliferating so fast that real machines were having trouble keeping up.

Only one third of a search engine is devoted to fulfilling search requests. The other two thirds are divided between crawling (sending a host of single-minded digital organisms out to gather information) and indexing (building data structures from the results). Ed's job was to balance the resulting loads.

When Ed examined the traffic, he realized that Google was doing more than mapping the digital universe. Google doesn't merely link or point to data. It moves data around. Data that are associated frequently by search requests are locally replicated—establishing physical proximity, in the real universe, that is manifested computationally as proximity in time. Google was more than a map. Google was becoming something else.

In the seclusion of the server room, Ed's thoughts drifted back to the second-floor communications center that linked the two hemispheres of SAGE's AN-FSQ-7 brain. "Are you awake? Yes, now go back to sleep!" was repeated over and over, just to verify that the system was on the alert.

SAGE's one million lines of code were near the limit of a system whose behavior could be predicted from one cycle to the next. Ed was reminded of cybernetician W. Ross Ashby's "Law of Requisite Variety": that any effective control system has to be as complex as the system it controls. This was the paradox of artificial intelligence: any system simple enough to be understandable will not be complicated enough to behave intelligently; and any system complicated enough to behave intelligently will not be simple enough to understand. Some held out hope that the path to artificial intelligence could be found through the human brain: trace the pattern of connections into a large enough computer, and you would end up re-creating mind.

Alan Turing's suggestion, to build a disorganized machine with the curiosity of a child, made more sense. Eventually, "interference would no longer be necessary, and the machine would have ‘grown up’." This was Google's approach. Harvest all the data in the world, rendering all available answers accessible to all possible questions, and then reinforce the meaningful associations while letting the meaningless ones die out. Since, by diagonal argument in the scale of possible infinities, there will always be more questions than answers, it is better to start by collecting the answers, and then find the questions, rather than the other way around.

And why trace the connections in the brain of one individual when you can trace the connections in the mind of the entire species at once? Are we searching Google, or is Google searching us?

Google's data centers—windowless, but without the blast protection—were the direct descendants of SAGE. It wasn't just the hum of air conditioning and warm racks of electronics that reminded Ed of 1958. The problem Ed faced was similar—how to balance the side that was awake with the side that was asleep. For SAGE, this was simple—the two hemispheres were on opposite sides of the same building—whereas Google's hemispheres were unevenly distributed from moment to moment throughout a network that spanned the globe.

Nobody understood this, not even Ed. The connections between data centers were so adaptable that you could not predict, from one moment to the next, whether a given part of the Googleverse was "asleep" or "awake." More computation was occurring while "asleep," since the system was free to run at its own self-optimizing pace rather than wait for outside search requests.

Unstable oscillations had begun appearing, and occasionally triggered overload alerts. Responding to the alarms, Ed finally did what any engineer of his generation would do: he went home, got a good night's sleep, and brought his old Tektronix oscilloscope with him to work.

He descended into one of the basement switching centers and started tapping into the switching nodes. In the digital age, everything had gone to megahertz, and now gigahertz, and the analog oscilloscope had been left behind. But if you had an odd wave-form that needed puzzling over, this was the tool to use.

What if analog was not really over? What if the digital matrix had now become the substrate upon which new, analog structures were starting to grow? Pulse-frequency coding, whether in a nervous system or a probabilistic search-engine, is based on statistical accounting for what connects where, and how frequently connections are made between given points. PageRank for neurons is one way to describe the working architecture of the brain. As von Neumann explained in 1948: "A new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse." Ulam had summed it up: “What makes you so sure that mathematical logic corresponds to the way we think?”
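
A quick illustration from me, not from the story: "PageRank for neurons" is less mysterious once the arithmetic is in view. PageRank is just a power iteration over a row-stochastic link matrix; the graph below is invented, and the 0.85 damping factor is the usual convention, my assumption rather than anything the essay specifies.

import numpy as np

# Minimal PageRank power iteration over a toy directed graph.
# adjacency[i, j] = 1 means node i links (or fires) to node j.
adjacency = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

transition = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-stochastic
n = adjacency.shape[0]
damping = 0.85                      # conventional value, assumed

rank = np.full(n, 1.0 / n)          # start from a uniform guess
for _ in range(100):
    rank = (1 - damping) / n + damping * rank @ transition

print(rank)                         # stationary "importance" of each node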

As Ed traced the low-frequency harmonic oscillations that reverberated below the digital horizon, he lost track of time. He realized he was hungry and made his way upstairs. The oscilloscope traces had left ghosts in his vision, like the image that lingers for a few seconds when a cathode-ray tube is shut down. As he sat down to a bowl of noodles in the cafeteria, he realized that he had seen these 13-hertz cycles, clear off the scale of anything in the digital world, before.

It was 1965, and he had been assigned, under a contract with Stanford University, to physiologist William C. Dement, who was setting up a lab to do sleep research. Dement, who had been in on the discovery of what became known as REM sleep, was investigating newborn infants, who spend much of their time in dreaming sleep. Dement hypothesized that dreaming was an essential step in the initialization of the brain. Eventually, if all goes well, awareness of reality evolves from the internal dream—a state we periodically return to during sleep. Ed had helped with setting up Dement's lab, and had spent many late nights getting the electroencephalographs fine-tuned. He had lost track of Bill Dement over the years. But he remembered the title of the article in SCIENCE that Dement had sent to him, inscribed "to Ed, with thanks from Bill." It was "Ontogenetic Development of the Human Sleep-Dream Cycle. The prime role of ‘dreaming sleep’ in early life may be in the development of the central nervous system."

Ed cleared his tray and walked outside. In a few minutes he was at the edge of the Google campus, and kept walking, in the dark, towards Moffett Field. He tried not to think. As he started walking back, into the pre-dawn twilight, with the hint of a breeze bringing the scent of the unseen salt marshes to the east, he looked up at the sky, trying to clear the details of the network traffic logs and the oscilloscope traces from his mind.

For 400 years, we have been waiting for machines to begin to think.

"We've been asking the wrong question," he whispered under his breath.

They would start to dream first.

As Slashdot put it, "Are we searching Google, or is Google searching us?"

Disgusting, craven and plain wrong! IOC Admits Internet Censorship Deal With China

"Dave writes 'BEIJING (Reuters) — Some International Olympic Committee officials cut a deal to let China block sensitive websites despite promises of unrestricted access, a senior IOC official admitted on Wednesday. Persistent pollution fears and China's concerns about security in Tibet also remained problems for organizers nine days before the Games begin. China had committed to providing media with the same freedom to report on the Games as they enjoyed at previous Olympics, but journalists have this week complained of finding access to sites deemed sensitive to its communist leadership blocked. 'I regret that it now appears BOCOG has announced that there will be limitations on website access during Games time,' IOC press chief Kevan Gosper said, referring to Beijing's Olympic organizers. 'I also now understand that some IOC officials negotiated with the Chinese that some sensitive sites would be blocked on the basis they were not considered Games related,' he said.' But yet somehow the mainstream media will ignore this because the Olympics are patriotic or something

(Via Slashdot.)

Hilarious! The biter, bit - DNS Attack Writer a Victim of His Own Creation

"BobB writes 'HD Moore has been owned. Moore, the creator of the popular Metasploit hacking toolkit, has become the victim of a computer attack. It happened on Tuesday morning, when Moore's company, BreakingPoint, had some of its Internet traffic redirected to a fake Google page that was being run by a scammer. According to Moore, the hacker was able to do this by launching what's known as a cache poisoning attack on a DNS server on AT&T's network that was serving the Austin, Texas, area. One of BreakingPoint's servers was forwarding DNS (Domain Name System) traffic to the AT&T server, so when it was compromised, so was HD Moore's company.'

(Via Slashdot.)

Tuesday, 29 July 2008

That'll be an "ouch" then...

At $109,000 apiece and only #6 off the production line, this owner (whose fault it seems to have been!) won't be a happy person.


Cuil isn't Cool....

....a new search engine went live yesterday. The media excitement they'd managed to generate (one of the founders is an ex-Google employee, and they'd persuaded the VC types to stump up millions of dollars in funding) soon fell apart when (a) their servers melted under the strain and (b) people found that their results are, well, pants! My advice is not to bother. The name is stupid as well, and they're soooooooo not "cool"...

Monday, 28 July 2008

Tom Vanderbilt's Why We Drive the Way We Do Unlocks How to Unclog Traffic

Tom Vanderbilt's Why We Drive the Way We Do Unlocks How to Unclog Traffic: "


Driving down a New Jersey highway three years ago, Tom Vanderbilt decided to stop being a goody-goody. He fought the urge to merge at the first indication that his lane was ending and rode it right to the pinch point, wedging his way in front of a furious driver at the last second. Racked with moral misgivings, he eventually looked into the science of merging and discovered salvation in high math, which proves he made the right choice — and not just for his own time-saving benefit, but for humankind (or at least commuter-kind — the seemingly selfish strategy keeps traffic moving faster for all). 'It doesn't have to be an ethics problem,' Vanderbilt says. 'It's really a system-optimization issue.'




That's when he decided to write Traffic: Why We Drive the Way We Do (And What It Says About Us). As part of his research, Vanderbilt set up Google Alerts to notify him about traffic-related news. 'Half were about road traffic, and half were about Internet traffic,' he says. Unfortunately, drivers have a major disadvantage relative to data packets flowing across the Web: Humans think too much. Packets go where they're told rather than relying on the scraps of incomplete intelligence and 'superstition,' as Vanderbilt calls it, that humans use when choosing how to get from point A to point B.



Drivers make shortsighted decisions based on limited information — a combination of what they can see and traffic reports that, even at their most sophisticated, are an average of 3.7 minutes old. At 60 mph, that's a 4-mile blind spot. 'The fundamental problem,' Vanderbilt says, 'is that you've got drivers who make user-optimal rather than system-optimal decisions' — a classic case of Nash equilibrium, in which each participant, based on what they believe to be others' strategies, sees no benefit in changing their own.
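
(My illustration, not Vanderbilt's: the gap between user-optimal and system-optimal shows up already in the classic two-road "Pigou" network. Road A always takes 1 hour; road B takes x hours when a fraction x of the drivers use it. Selfish drivers pile onto B until it is exactly as slow as A; a planner would split the traffic.)

# Pigou's example: user-optimal vs system-optimal routing (toy).
def average_travel_time(x_on_b):
    """Average time when fraction x_on_b of drivers choose road B."""
    return (1 - x_on_b) * 1.0 + x_on_b * x_on_b

# Nash equilibrium: nobody gains by switching, which happens when
# everyone is on B (x = 1) and B is exactly as slow as A.
print("equilibrium avg time:", average_travel_time(1.0))    # 1.00 h

# System optimum: minimise the average over all possible splits.
best_x = min((x / 1000 for x in range(1001)), key=average_travel_time)
print("optimal fraction on B:", best_x)                     # ~0.5
print("optimal avg time:", average_travel_time(best_x))     # 0.75 h

Everyone routing selfishly costs each driver a full hour; the coordinated split gets the average down to 45 minutes, which is exactly the system-optimal decision no individual driver will make on their own.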



Those who seek a more efficient traffic solution use not only network topology and queuing theory but psychology and game theory, too. A typical puzzle: Waiting for an on-ramp metering light — a mild and remarkably effective congestion-control measure — has been proven to rankle drivers more than merging directly into a traffic jam. 'What bothers people is that they can see traffic flowing smoothly,' Vanderbilt says. 'So they think, 'Why should I wait?' They tend not to accept that the traffic is flowing smoothly precisely because of the metering light.'



What about faster, better traffic info? One new technology, Dash Navigation's GPS-based social networking system, may be a step toward dynamic traffic routing, but only for those who have Dash's device, and maybe only temporarily. Suppose Dash were to become the hit its backers — including VC firms Sequoia Capital and Kleiner Perkins Caufield & Byers — hope it will. As soon as drivers have all the information about which routes are congested, they'll divert to others that are clear. But if enough people do this at roughly the same time, the clear routes become jammed. Vanderbilt laments this as the inevitable 'death of the shortcut.'



The obvious answer, then, is to make the road network as efficient as the information superhighway. Make the packets (cars) dumb and able to take marching orders from traffic routing nodes. The obvious problem with that: No self-respecting, freedom-loving American would stand for it.
(Via Wired News.)

Sunday, 27 July 2008

No Secret Software!

No Secret Software!: "


For my money, Christine Peterson offered the most important message I heard at OSCON. Way back when, she invented the term ‘Open Source’ and, if we get behind it, which we should, the No Secret Software! rallying cry could be as big or bigger.


It’s simple: when data is gathered and used for the people as part of civic processes (voting is a good example), processing it using secret software, especially if it’s a private-sector secret, should be totally out of bounds.



This is very closely aligned to the struggle for the use of open-source
software where appropriate, but ‘Open Source’ is a term of art and is
associated with ill-groomed inarticulate geeks who have odd opinions about lots of
things. ‘Secret software’ is a term that anyone can understand instantly, and
it sounds creepy and dangerous; because secret software in the public sector
is creepy and dangerous, and simply shouldn’t be allowed.


Ms Peterson gently chided the Open-Source community for having let the
e-voting debacle happen in the first place; it was foreseeable and should have
been headed off. I think she has a point.


Her aim in the OSCON talk (which is
online at blip.tv) was to give
warning of similar battles looming in the realm of security data, which is
already vast and is growing fast. It will be gathered by our
governments and will be put to lots of uses involving lots of
software and storage.


We will get better security and simultaneously less potential for abuse if
we rule out the use of secret software. So, let’s do that.


It’s not enough to be right about an important issue. It’s vital to frame
our opinions and beliefs in language that’s simple and believable and whose
meaning is clear and self-evident.


I think we’re in Ms Peterson’s debt for giving us this important rhetorical
tool. I’m going to start putting it to use whenever these issues come up in
the civic sector. I think if we all get behind this, we’ll strengthen our
position in some debates that really matter, and we’ll be better citizens.


"



(Via ongoing.)

Saturday, 26 July 2008

Pot Growers - watch out for drug busts after Google comes calling...

"nathan halverson writes 'Google recently launched Street View coverage in Sonoma and Mendocino counties — big pot growing counties. And while they hardly covered the area's biggest city, Santa Rosa, they canvassed many of the rural areas known for growing pot. I found at least one instance where they drove well onto private property, past a gate and no trespassing sign, and took photographs. I didn't spend a whole lot of time looking, but someone is likely to find some pot plants captured on Street View. That could cause big problems for residents. Because while growing a substantial amount of pot is legal in Mendocino and Sonoma County under state law, it's highly illegal under federal law and would be grounds for a federal raid.'

(Via Slashdot.)

Cloud computing and Amazon's S3 downtime - an analysis


Amazon S3 Availability Event: July 20, 2008

We wanted to provide some additional detail about the problem we experienced on Sunday, July 20th.

At 8:40am PDT, error rates in all Amazon S3 datacenters began to quickly climb and our alarms went off. By 8:50am PDT, error rates were significantly elevated and very few requests were completing successfully. By 8:55am PDT, we had multiple engineers engaged and investigating the issue. Our alarms pointed at problems processing customer requests in multiple places within the system and across multiple data centers. While we began investigating several possible causes, we tried to restore system health by taking several actions to reduce system load. We reduced system load in several stages, but it had no impact on restoring system health.

At 9:41am PDT, we determined that servers within Amazon S3 were having problems communicating with each other. As background information, Amazon S3 uses a gossip protocol to quickly spread server state information throughout the system. This allows Amazon S3 to quickly route around failed or unreachable servers, among other things. When one server connects to another as part of processing a customer's request, it starts by gossiping about the system state. Only after gossip is completed will the server send along the information related to the customer request. On Sunday, we saw a large number of servers that were spending almost all of their time gossiping and a disproportionate amount of servers that had failed while gossiping. With a large number of servers gossiping and failing while gossiping, Amazon S3 wasn't able to successfully process many customer requests.
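
A sketch from me of the general mechanism, not Amazon's code: in push-style gossip, every server that knows a piece of state tells one randomly chosen peer each round, so news reaches n servers in a number of rounds that grows only logarithmically. That exponential fan-out is why S3 can route around a failed server quickly, and also why the corrupted state described below saturated the whole fleet.

import random

# Push-style gossip: how many rounds until every server knows?
def rounds_to_spread(n_servers, seed=0):
    random.seed(seed)
    informed = {0}                      # server 0 learns a fact
    rounds = 0
    while len(informed) < n_servers:
        for server in list(informed):   # every informed server...
            peer = random.randrange(n_servers)
            informed.add(peer)          # ...pushes to one random peer
        rounds += 1
    return rounds

for n in (100, 1000, 10000):
    # grows roughly logarithmically in the fleet size
    print(n, "servers:", rounds_to_spread(n), "rounds")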

At 10:32am PDT, after exploring several options, we determined that we needed to shut down all communication between Amazon S3 servers, shut down all components used for request processing, clear the system's state, and then reactivate the request processing components. By 11:05am PDT, all server-to-server communication was stopped, request processing components shut down, and the system's state cleared. By 2:20pm PDT, we'd restored internal communication between all Amazon S3 servers and began reactivating request processing components concurrently in both the US and EU.

At 2:57pm PDT, Amazon S3's EU location began successfully completing customer requests. The EU location came back online before the US because there are fewer servers in the EU. By 3:10pm PDT, request rates and error rates in the EU had returned to normal. At 4:02pm PDT, Amazon S3's US location began successfully completing customer requests, and request rates and error rates had returned to normal by 4:58pm PDT.

We've now determined that message corruption was the cause of the server-to-server communication problems. More specifically, we found that there were a handful of messages on Sunday morning that had a single bit corrupted such that the message was still intelligible, but the system state information was incorrect. We use MD5 checksums throughout the system, for example, to prevent, detect, and recover from corruption that can occur during receipt, storage, and retrieval of customers' objects. However, we didn't have the same protection in place to detect whether this particular internal state information had been corrupted. As a result, when the corruption occurred, we didn't detect it and it spread throughout the system causing the symptoms described above. We hadn't encountered server-to-server communication issues of this scale before and, as a result, it took some time during the event to diagnose and recover from it.
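
A toy demonstration from me (not Amazon's code) of why the missing checksum mattered: a single flipped bit can leave a message perfectly "intelligible" while its MD5 digest changes completely, so a checksum carried with each state message lets a receiver reject corruption instead of gossiping it onward. The message format here is invented for illustration.

import hashlib

message = b"server-42 status=healthy"
digest = hashlib.md5(message).hexdigest()     # sent along with the message

corrupted = bytearray(message)
corrupted[8] ^= 0x01                          # flip one bit: '2' -> '3'
corrupted = bytes(corrupted)

print(corrupted)               # b'server-43 status=healthy', still "intelligible"
print(hashlib.md5(corrupted).hexdigest() == digest)   # False: receiver rejects it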

During our post-mortem analysis we've spent quite a bit of time evaluating what happened, how quickly we were able to respond and recover, and what we could do to prevent other unusual circumstances like this from having system-wide impacts. Here are the actions that we're taking: (a) we've deployed several changes to Amazon S3 that significantly reduce the amount of time required to completely restore system-wide state and restart customer request processing; (b) we've deployed a change to how Amazon S3 gossips about failed servers that reduces the amount of gossip and helps prevent the behavior we experienced on Sunday; (c) we've added additional monitoring and alarming of gossip rates and failures; and, (d) we're adding checksums to proactively detect corruption of system state messages so we can log any such messages and then reject them.

Finally, we want you to know that we are passionate about providing the best storage service at the best price so that you can spend more time thinking about your business rather than having to focus on building scalable, reliable infrastructure. Though we're proud of our operational performance in operating Amazon S3 for almost 2.5 years, we know that any downtime is unacceptable and we won't be satisfied until performance is statistically indistinguishable from perfect.

Sincerely,

The Amazon S3 Team



Wednesday, 23 July 2008

That didn't take long! Attack Code Published For DNS Vulnerability

Patch now people :)

And I see that RoadRunner/BrightHouse haven't yet, so OpenDNS is my friend.

"get_Rootin writes 'That didn't take long. ZDNet is reporting that HD Moore has released exploit code here for Dan Kaminsky's DNS cache poisoning vulnerability into the point-and-click Metasploit attack tool. From the article: 'This exploit caches a single malicious host entry into the target nameserver. By causing the target nameserver to query for random hostnames at the target domain, the attacker can spoof a response to the target server including an answer for the query, an authority server record, and an additional record for that server, causing the target nameserver to insert the additional record into the cache.'"

(Via Slashdot.)
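
Some back-of-envelope arithmetic from me, not from the article, on why this is so dangerous and why the patch is source-port randomisation: each random-hostname query is a brand-new race, and before the patch a spoofed reply only had to match a 16-bit transaction ID. Randomising the source port multiplies the space the attacker must guess. The figure of 100 forged replies per race below is an arbitrary assumption for illustration.

TXID_SPACE = 2 ** 16        # DNS transaction IDs are 16 bits
PORT_SPACE = 2 ** 16        # roughly; usable ephemeral ports vary

forged_replies_per_race = 100          # assumed attacker burst size

p_unpatched = forged_replies_per_race / TXID_SPACE
p_patched = forged_replies_per_race / (TXID_SPACE * PORT_SPACE)

print(f"unpatched: {p_unpatched:.4%} per race")  # ~0.15%: seconds to win
print(f"patched:   {p_patched:.8%} per race")    # ~0.0000023% per race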

Kaminsky on How He Discovered the DNS Flaw

If you (or your ISP) haven't patched your name-servers yet, then now might be a VERY good time to do so. For why, take a look at Dan's interview below. And to see if your network (or that of your ISP) is vulnerable, take a look at the test here. And my ISP Road Runner (a part of Time Warner) still hadn't patched as of 10 minutes ago :(

Another command line version is detailed here courtesy of DNS-OARC as follows:

Yesterday's announcement of CERT VU#800113 makes it clear that resolvers should use random source ports when sending queries. Here at OARC, we've crafted a special DNS name and server that you can query to learn whether or not your own resolver is using random ports. Use a DNS query tool such as dig to ask for the TXT record of porttest.dns-oarc.net:

$ dig +short porttest.dns-oarc.net TXT

You should get back an answer that looks like this:

z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"169.254.0.1 is FAIR: 26 queries in 0.1 seconds from 25 ports with std dev 3843.00"

Your resolver's randomness will be rated either GOOD, FAIR, or POOR, based on the standard deviation of observed source ports. In order to receive a GOOD rating, the standard deviation must be at least 10,000. For FAIR it must be at least 3,000. Anything less is POOR. The best standard deviation you can expect to see from 26 queries is in the 18,000-20,000 range.

DNS records used in this test are given 60 second TTLs. To repeat the test you should wait at least 60 seconds.

Note that you can tell dig to test a specific resolver with an @-argument:

$ dig @4.2.2.3 +short porttest.dns-oarc.net TXT
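
For the curious, here is the rating arithmetic mimicked in Python, a sketch of mine using the thresholds quoted above (GOOD at a standard deviation of at least 10,000, FAIR at 3,000); the real test, of course, observes the ports your resolver actually sends from.

import random
import statistics

def rate_ports(ports):
    sd = statistics.pstdev(ports)     # same statistic OARC reports
    if sd >= 10000:
        return "GOOD (std dev %.0f)" % sd
    if sd >= 3000:
        return "FAIR (std dev %.0f)" % sd
    return "POOR (std dev %.0f)" % sd

# An unpatched resolver counts up from a fixed port; a patched one
# draws from the whole ephemeral range.
sequential = [2053 + i for i in range(26)]
randomized = [random.randint(1024, 65535) for _ in range(26)]

print("sequential:", rate_ports(sequential))   # POOR
print("randomized:", rate_ports(randomized))   # almost surely GOOD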


Dan Kaminsky is understandably swamped today, given the unexpected early release of information about the critical DNS flaw he discovered that potentially affects the security of every website on the internet.
But he found some time to speak with Threat Level about how he discovered the vulnerability that has system administrators scrambling to patch before an exploit -- which is expected to go public by the end of today -- is widely available.
Kaminsky discovered the bug by chance about six months ago, which he promptly disclosed to people in the DNS community. At the end of March, an emergency summit was convened at Microsoft's headquarters, gathering 16 people from around the world to discuss how to address the problem.
On July 8, Kaminsky held a press conference announcing a multi-vendor patch and urging DNS server owners to upgrade their software with the patch immediately. But he declined to disclose details of the bug until next month, when he plans to deliver a talk about the flaw at the Black Hat Hacker Conference. Until then, Kaminsky asked researchers not to speculate about the bug, to avoid giving hackers information that could help them exploit it.
Thirteen days after that press conference, however, the security firm Matasano inadvertently released details about the bug in a blog post that the company quickly removed but that has since been re-posted elsewhere.
I spoke with Kaminsky about that disclosure, among other issues.
Threat Level: So how pissed off are you?
Dan Kaminsky: (Laughs) I am not the important part here. The important thing is that people patch.
I have to be blunt. The drama is fun and interesting and cool, but it's a distraction. (The important thing is that) it's a really bad bug that really impacts every website you use and your readers use. It impacts whether or not readers are even going to see the article you're about to write. Now I could get into a big fight with lots of people ... and that might happen at some point! But it's a distraction from right now, which is, you know, we did good. We got 13 days of a patch being out without the bug being public. That's unprecedented. I'm pretty proud of at least 13 days. I would have liked 30, but I got 13 ... But the circumstances of how it went public are not what's important today. There will be a time for that, just not now. What is important now is people need to patch.
TL: There were a lot of people who balked at patching because they didn't know the details of the bug.
DK: Well you know, there were people who said, 'Dan, I wish I could patch but I don't know the bug and I can't get the resources I need to patch it.' Well you know the bug now.
You know, Verizon Business has a blog entry where they say that the greatest short-term risk from patching DNS was from the patch itself, from changing such a core and essential element to their systems. I know this. I was a network engineer before I was a security engineer. So that's why we took such extraordinary lengths to try to get people as much time as possible (to patch their systems). There's just a lot of complexity in doing something on this scale. This is something I think a lot of people don’t realize. It was difficult to get the patches even written, let alone get them all released on a single day.
But let me tell you, the complete lack of whining from the (DNS software) vendors ... if I could have gotten as little whining from the security (professionals) ... no I'm not going to say that. It's so tempting! I'm simply going to say this in positive terms. I wish everybody could be as cooperative and understanding and as helpful as Microsoft and ISC (the Internet Systems Consortium) and Cisco and everyone else was who worked so hard to get customers what they needed to protect our networks.
TL: How did you come across the bug? You said in the press conference on July 8 that you hadn't even been looking for this. So what were you doing when you found the bug?
DK: If you look at the history of my talks ... one year I had done some stuff on triangular routing. It's where you have multiple hosts that are all trying to host the same data and you want the fastest one to host it.
So I'm working at this, and I'm wondering if I can, like, use DNS races to figure out the fastest name servers to provide data. I started thinking about this trick I had done (before) with CNAMES -- they're an alias in DNS.
I realized I could look up a random name, and then whichever random name won would override the record for www.mywebsite.com. Essentially, I was looking for a faster way to host data on the internet and I remembered I have ways of overwriting which record the name server uses for 'www' by looking up something else and having it overwrite. And then I thought about that for a second. Wait, it's going to overwrite whatever is www.mywebsite.com! This kind of has security implications! Because if it works you can get around all of our DNS cache-poisoning protections. Then it worked!
I first tried it about six months ago. It took a couple of days to get working. I wrote it in Python to begin with and it was pretty slow. Then I rewrote it in C and it wasn’t slow anymore. It was a couple of seconds. That's when I realized I had a problem.
TL: Then what did you do?
DK: I looked at it for a while, talked to a couple of really, really trusted people about it. Eventually I went to Paul Vixie (of ISC).
I've been ... looking at other issues with DNS for some time and I had already been working with Vixie on some of the fallout from last year's talk, when I was talking about DNS re-binding attacks. So I go to Paul and I say, Listen, we've got a bigger problem. And I send him the code and the packets and the details. And then there's that moment of, Yeah, we do have a problem.
Paul's an institution in the DNS realm and he basically goes ahead and contacts everybody and brings in Florian Weimer from Germany and brings in representatives from Cisco, Open DNS ... And we start talking on (an e-mail) thread for a couple of weeks about what the implications of this are. A couple of weeks in we realized we should probably have a summit and we should probably have it soon. So I asked Microsoft if they'd provide hosting and they absolutely agreed. On February 20 I had mailed Paul Vixie. And on March 31, 16 people from around the world were in Microsoft headquarters.
When I say there was no b.s. from the vendors, there was just no b.s. from the vendors. They got it. They understood they were in trouble. We skipped past the entire 'Is it really a bug?' phase, that's still continuing in public (discussions).
TL: But you’ve got to understand why people said that. You acknowledged that in not disclosing the details, you opened yourself up to people being skeptical about the bug.
DK: People are allowed to be very, very skeptical. But, you know, don't be so skeptical that you're telling people to not patch.
This is a really bad bug. And for everyone who (says), Oh, I knew about this years ago . . . no, you didn't. Stop pretending you did. Because every time you say it, another network doesn't patch (their system).
This (attack takes) ten seconds to hijack the net. . . . Unless you like other people reading your e-mail, go patch. If you want to actually see Google and Yahoo and MySpace and Facebook and the entire web, if you actually want to see the correct web sites, go patch. The debate about whether this bug is new or old is ultimately useless. In ten seconds, the ISP DNS servers are taken over.
TL: It was kind of pie-in-the-sky to think that everyone was going to sit on their hands for 30 days and not post information about what they thought the bug was wasn't it?
DK: You know, a lot of people did. The guys who were actually smart enough to find the bug (didn't disclose it). The people who have been complaining have been people who couldn’t figure it out.
The people who could figure it out e-mailed me privately. And that says a lot. . . . The people who were good enough to figure out the bug by themselves I am incredibly gracious and appreciative of them for mailing me and helping me get the thirteen days that I got.
TL: How quickly did you get the first response from someone who discovered what the bug was?
DK: It was a couple of days.
TL: How far along are people in patching the DNS servers? Do you know how many have been patched?
DK: Way more than I ever would have hoped, (but) less than I would have liked. We were in the high double digits (in terms of percentages). We were getting some pretty good pickup on this patch. The last time I looked at people who were testing against my site it was somewhere in 30 to 40 percent . . . people who were going to my site to test their name servers.
There are a couple million name servers on the internet. There are many million more that are not physically on the internet but are behind firewalls. Ultimately any name server that is not patched is vulnerable and will probably eventually be attacked. The attack is just too good and too easy. My grandma's going to be in the audience (at Black Hat). My grandma's going to understand the bug."

(Via Wired News.)

Today's Art of The Day from Google...

I've been lucky enough to visit and see this masterpiece in situ and thought I'd share it with y'all out there :)

Michelangelo Buonarroti:
The Prophet Jeremiah
fresco, 1508-1512
Sistine Chapel, Vatican City


Think Your Time on Earth Is Short? Try Being a Muon!

A lovely visual showing the shortest- and longest-lived "things" in the universe, over at Wired magazine.


Fascinating! World's Oldest Bible Going Online


"99luftballon writes 'The British Museum is putting online the remaining fragments of the world's oldest Bible. The Codex Sinaiticus dates to the fourth century BCE and was discovered in the 19th century. Very few people have seen it due to its fragile state — that and the fact that parts of it are in collections scattered across the globe. It'll give scholars and those interested their first chance to take a look. However, I've got a feeling that some people won't be happy to see it online, since it makes no mention of the resurrection, which is a central part of Christian belief.'On Thursday the Book of Psalms and the Gospel According to Mark will go live at the Codex Sinaiticus site. The plan is to have all the material up, with translations and commentaries, a year from now.

(Via Slashdot.)

Tuesday, 22 July 2008

The Ghost in Your Machine: IPv6 Gateway to Hackers

"It may be years before the new internet protocol IPv6 takes over from the current IPv4, but a security researcher is warning that many systems -- corporate and personal -- are already open to attack through channels that have been enabled on their machines to support IPv6 traffic.


(Via Wired News.)

Thursday, 17 July 2008

"The internet treats censorship as damage and routes around it" - Cuba

Is an old saw (from activist John Gilmore) and nowhere is this more appropriate than in this article from WikiLeaks (an excellent site which everybody should bookmark) about Cuba's ongoing attempt to get better (read: faster) Internet access, as they're still embargoed by the US from accessing any existing links:


Cuba to work around US embargo via undersea cable to Venezuela
JULIAN ASSANGE (investigative editor)
Wednesday July 15, 2008
From Santiago de Cuba to La Guaira: Cuba will be connected to the Internet by 2010

Page one of the Cuba-Venezuela confidential contract annex

According to the Vice Minister of Telecommunication, Boris Moreno, the government is unable to offer Cubans comprehensive Internet for their new PCs because the American embargo prevents it from getting service directly from the United States nearby through underwater cables. Instead, Cuba gets Internet service through less reliable satellite connections, usually from faraway countries including Italy and Canada.

— Cuba blames US for Internet restrictions, AP Newswire 16 May 2008[1]
Documents released by Wikileaks reveal that Cuba and Venezuela signed a confidential contract in 2006 to lay an undersea fibre-optic cable that bypasses the United States. The cable is to be completed by 2010.
The contract between the two countries, which has been independently verified, adds weight to Cuban statements that the United States economic embargo of the island has forced it to rely on slow and expensive satellite links for Internet connectivity. Cuba is situated a mere 120 kilometres off the coast of Florida. The proposed 1,500 kilometre cable will connect Cuba, Jamaica, Haiti and Trinidad to the rest of the world via La Guaira, Venezuela.
Carrying out the work are CVG Telecom (Corporación Venezolana de Guyana) and ETC (Empresa de Telecomunicaciones de Cuba).
The leaked documents have technical details and pictures of the cable, maps, and systems to be used, parties signing the agreement, terms and conditions, costs, and a schedule of charges and compromises. The connection allows for the transmission of data, video and voice (VoIP). According to the contract, the agreement is designed to build a relationship of "strategic value" which will permit Cuba and Venezuela to, among other matters:
Increase interchange between the two governments.
Foster science, cultural and social development.
Increase the volume and variety of relationships between country members of ALBA (Bolivarian Alternative for America) and the South American MERCOSUR trading bloc.
Help serve the increasing demand of commercial traffic between Cuba, Venezuela and the rest of the world.
The contract parties state that given the diversity of foreign affairs, they wish to build a new international order, multi-polar, based in sustainability, equity and common good and that an international cable with maximum security protected by international organizations (ITU/ICPC) is crucial.
The documents disclose plans to separate commercial traffic and governmental traffic upon data arrival.
See
Fibra optica entre Venezuela y Cuba 2006 - full documents obtained by Wikileaks
Cuba-Venezuela Communications Project On the Move - June 9, 2008 Cuban government statement

Wednesday, 16 July 2008

What a load of old $%#@!&*! Grawlix

Grawlix: "

‘A string of typographical symbols used (especially in comic strips) to represent an obscenity or swear word.’


(Via Daring Fireball.)

School-friends

Via an email from Friends Reunited, a guy I last saw not many years after we left DGSB back in the 70s, Derek Waldron, got in touch. Nice to hear from him. And this shot is of his pride and joy, some gratuitous car porn AKA a Diablo GT.


Tuesday, 15 July 2008

Only in America! SCO's Lawsuit Gets Even Crazier

"I Don't Believe in Imaginary Property writes 'With SCO in Chapter 11 bankruptcy and there being little to read other than status reports and the boring financial details of how the company is wasting its last few dollars, one could be excused for thinking the SCO lawsuits had lost their zip. But things just got a bit more interesting. Jonathan Lee Riches has asked the court to take over. Yes, the man also known as inmate #40948-018 is now bringing his legal experience to the table, having previously filed pro se lawsuits against such entities as Michael Vick, Michael Jordan, Mickey Mantle, the Lincoln Memorial, the Thirteen Tribes of Israel, 'Various Buddhist Monks,' Mein Kampf, Denny's, George W. Bush, the Soviet Gulag Archipelago, Bellevue Hospital, Iran's Evin Prison, Auschwitz, and Plato. In his hand-written pro se motion (PDF), he asks to intervene as Plaintiff pursuant to FRCP 24(a)(2). As best anyone can read the motion, it appears that he offered Novell some 'royalty payments' and they refused them, so he wants to protect his UnixWare rights. He also claims to have proof of SCO's claims, but he wants take over part of the case via FRCP 24 because SCO isn't competent, and allegedly he could do a better job. To be fair, between him and Darl, it's something of a toss-up.'

(Via Slashdot.)

Ooops :) Disgruntled Engineer Hijacks San Francisco's Computer System

UPDATE: his bail has been set at something like 5 million dollars as the city fears if he's freed, he'll really screw things up for them!

"ceswiedler writes 'A disgruntled software engineer has hijacked San Francisco's new multimillion-dollar municipal computer system. When the Department of Technology tried to fire him, he disabled all administrative passwords other than his own. He was taken into custody but has so far refused to provide the password, and the department has yet to regain admin access on their own. They're worried that he or an associate might be able to destroy hundreds of thousands of sensitive documents, including emails, payroll information, and law enforcement documents.'

(Via Slashdot.)

Monday, 14 July 2008

The Clash...

Reggae and rock in an irresistible mix.

Who'd have thought it! Bush lifts offshore drilling ban

"President George W Bush lifts an executive ban on oil drilling in most US coastal waters, and urges Congress to follow suit."
(Via BBC News.)



So, apart from the fact that there are HUGE amounts of un-tested areas of the continental USA that the oil companies already own and haven't done anything with, and the fact that new off-shore sites would take up to 10 years to come on-stream (really helpful with today's gas prices! not!), what else is new with this special-interest President?

Sunday, 13 July 2008

On privacy...

From Ben, AKA "fyngyrz", comes an excellent piece on the meaning of the word privacy:

"What’s the problem?

It has come to my attention that many people feel that privacy is difficult to define. I was quite surprised to encounter this claim, because the nature of privacy seems quite obvious to me. Yet, Professor Daniel Solove of George Washington University Law School says bluntly that the question “What is privacy?” has “long plagued those seeking to develop a theory of privacy and justifications for its legal protection.” Apparently, I’m either quite confused, or I owe it to the world to write down what privacy is. The thing is, I really don’t think I am confused, so I suppose I had best put fingers to keyboard. After all, if I am wrong, I’m sure someone will take a few moments to explain why.


Defining privacy

It is very simple, really: Privacy is defined by the set of social boundaries dealing with access in any one society that we are expected not to cross. How well you respect privacy is essentially whether you elect to cross those boundaries against those expectations.

Such boundaries may be society wide, such as the understanding that we don’t put our hands inside each other’s clothes without permission, or they may be the result of a specific understanding between two individuals, such as a parent’s agreement not to start reading a child’s story until the child is done writing it.

For example, I should not enter your home without your permission; if I do so, I have crossed a well understood social boundary. Doing this is a violation of your privacy. If you lock your home and bar your windows, you are hardening that boundary, but it is still the same boundary. What you have done with it is made the boundary physically more difficult to cross; this does not change either why the boundary exists, or make it socially acceptable for me to enter other people’s homes who have not similarly hardened their domicile. Were you to invite me into your home, you have explicitly dropped that boundary — it does not exist for the duration of the invitation — consequently I am not violating your privacy were I to enter.

If you write a letter, presuming only that it is not addressed to me, I should not read that letter without your explicit permission. Again, this is a well understood social boundary. It is even codified in the 4th amendment of the US constitution:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Think about the wording there. Over two centuries ago, four social boundaries were deemed so fundamentally important to the citizens of this nation that they were used to form the basis of legal boundaries as well — boundaries that were designed to prevent the federal (and later, via the 14th amendment, state) government from having the power to violate these boundaries. There are more boundaries codified in the constitution. For instance, the 3rd amendment prevents the government from forcing a homeowner to give soldiers lodging; there’s that home-entry boundary again.

Digital Privacy

The question has arisen lately as to whether the US government should have the right to read your email and other digital communications. This isn’t so much a question as it is an observation: President George Bush arbitrarily began mining data from telephone conversations, the Internet, financial transaction records and more during his presidency.

Looking at the fourth amendment, it is strikingly clear that the intent of the amendment was to make specific that your communications were private; letters were their form of communications. Looking at the telecommunications laws, it is just as clear that at one time, this was well understood by congress.

I really don’t think you can make an argument that the websites and newsgroups you read, the personal email you send and receive, the instant messages you exchange, are not precisely the type of information the fourth amendment was trying to explicitly codify a boundary to prevent government access barring probable cause and the subsequent issuance of a warrant.

Quite aside from the constitution, the social boundary is obvious: If you write an email, you expect me not to read it unless you wrote it to me. If I do read it, I’ve crossed that boundary, and you will react with a feeling of having been violated. You can harden the boundary any number of ways; you can encrypt, you can use proxy servers to send and receive your communications, you can use steganography to hide your message in an innocuous image; but just like adding locks and bars to homes, this doesn’t in any way say that it is acceptable to violate other people’s communications because they have not done a comparable amount of hardening. The boundary is not in any way different in its nature, only in the degree of effort it will take to cross it. The point is, you’re not supposed to try to cross that boundary. It makes no difference if effort has been extended to harden it, or not.

You can extend the boundary idea to any form of privacy and it will still work. You can also, by comparing the various venues, develop a fine sensibility as to what constitutes a boundary violation. Allow me to demonstrate:

Let us say that a lady elects to wear a skirt. Does this give us the right to look up her skirt? After all, if she didn't want us looking, she could have hardened the boundary (that is, worn pants), is this not true? But any reasonable person understands the social boundary perfectly well: she is not extending anyone permission to look up her skirt just because she is wearing one.

But what if she is a shoplifter and is hiding merchandise up her skirt? Would this not give us the right to look up her skirt? The answer is, it would if one had knowledge that this was the case.

The constitution calls this “probable cause.” The idea that a lady could hide merchandise under her skirt clearly does not translate into the right to look up all ladies’ skirts — the very idea is ludicrous, is it not?

Yet the US government is telling us that it is justified in looking at everyone's email and other Internet activity because those channels could be used for illicit purposes.

This is precisely the same kind of reasoning we just disposed of with skirts; the only time the government should be looking at any communication is when (a) they have probable cause to think that those communications are of a criminal nature, (b) they have obtained a warrant that (c) specifically describes the communications to be searched. Why? Go read the fourth amendment again — it really couldn’t be any plainer.

So there it is: privacy arises as a consequence of socially understood boundaries relating to access. Such a boundary may be widely understood, like home entry, or narrow and personal, like telling your minor child that you will not read their diary without permission, though as a parent you have both the right and the means to do so. It is by understanding what we are expected to do, and how well we subsequently comply with those expectations, that the concept of privacy acquires meaning, and that we avoid that disturbing feeling of having had our expectations sundered — our privacy violated." © Ben AKA fyngyrz 2008.
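Ben's point about hardening the boundary is easy to make concrete. Here's a minimal, present-day Python sketch (my own illustration, not his; it assumes the third-party cryptography package, and it waves away the genuinely hard part, getting the key to your correspondent):

    # A minimal sketch of "hardening" a message with symmetric encryption,
    # using the third-party 'cryptography' package (pip install cryptography).
    # How the recipient obtains the key is the hard part, waved away here.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the shared secret, somehow exchanged
    cipher = Fernet(key)

    token = cipher.encrypt(b"Meet me at the usual place.")
    print(token)                     # gibberish without the key
    print(cipher.decrypt(token))     # b'Meet me at the usual place.'

Note that all the encryption changes is the effort needed to cross the boundary; as Ben says, the boundary itself, and the social expectation behind it, remain exactly the same.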

ACLU Files Lawsuit Challenging FISA

"Wired's Threat Level blog reports that the American Civil Liberties Union has filed a lawsuit contesting the constitutionality of the Foreign Intelligence Surveillance Act. Recently passed by both the House and Senate, FISA was signed into law on Thursday by President Bush. The ACLU has fought aspects of FISA in the past. The new complaint (PDF) alleges the following: 'The law challenged here supplies none of the safeguards that the Constitution demands. It permits the government to monitor the communications of U.S. Citizens and residents without identifying the people to be surveilled; without specifying the facilities, places, premises, or property to be monitored; without observing meaningful limitations on the retention, analysis, and dissemination of acquired information; without obtaining individualized warrants based on criminal or foreign intelligence probable cause; and, indeed, without even making prior administrative determinations that the targets of surveillance are foreign agents or connected in any way, however tenuously, to terrorism.'

Read more of this story at Slashdot.

Friday, 11 July 2008

Forget those corporate disclaimers...

...which are probably a waste of time anyway, and must have consumed gazillions of trees as they're printed out on those endless chains of replies, despite that Nirvana-like promise, oh so many years ago, of the "paperless office". Just use this one, which I find funny:

"This message is provided 'AS IS' without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of accuracy, correct grammar and spelling, lack of vulgarity or adult themes, correct references, absence of viruses and/or viral memes, originality, or fitness for any particular purpose."

One bad Apple: Server problems spoil iPhone 3G launch

By Katie Marsal
Published: 03:00 PM EST

"Apple Inc.'s iPhone 3G roll-out has quickly shifted from the much-ballyhooed consumer electronics launch of the year into a nightmare for both the company and its loyal customers.

Thousands of new iPhone 3G buyers around the world are stuck Friday with iPhones that aren't able to make calls, as the iTunes servers required to fully activate them experienced a high-tech meltdown, ultimately falling offline under an overwhelming number of simultaneous requests.

The issues almost immediately soured the US launch of the highly anticipated touchscreen handset, as the backlog of activations kept thousands of other customers waiting in long lines outside of retail stores much longer than they or Apple had anticipated.

What's more, the problems trickled down to first-generation iPhone owners who were attempting to upgrade their devices with version 2.0 of the handset's software, which was also released Friday. Unlike previous updates, the 2.0 release completely erases all data on first-generation iPhones and deactivates them before installation.

After installation, the phones are required to connect to Apple's iTunes servers for reactivation -- the same servers that had fallen offline due to requests for new iPhone 3G activations. As such, existing iPhone owners attempting to update their software were also left with phones that were unable to make calls.

The issues appear to be a result of Apple underestimating the number of simultaneous worldwide connections to its iTunes servers during the iPhone 3G launch, a problem that wasn't helped by the simultaneous release of new software updates for existing owners.

Unlike last year, when the Cupertino-based company launched its first-generation iPhone exclusively in the U.S. and then later followed up with successive launches in a handful of European countries, this year's launch kicked off in 21 countries over the course of 24 hours.

Attempting to stifle the grey market for iPhones that were being purchased in the U.S., then unlocked and resold overseas at higher prices, Apple also did away with home activation, mandating that each and every new iPhone 3G sold in the U.S. be fully activated before it leaves the store.

Apple has also been experiencing a number of problems getting its new set of 'MobileMe' online tools up and running smoothly. The $99-per-year 'push' email and calendar service launched early Thursday morning but was still facing a large number of issues outside of email as of Friday afternoon."



(Via AppleInsider.)

Package Managers As Achilles Heel

Package Managers As Achilles Heel: "Re:Sounds real and exploitable.. (Score:5, Insightful)
by jmorris42 (1458) * <`jmorris' `at' `beau.org'> on Thursday July 10, @08:22PM (#24145841)
> One long term solution would be to sign package metadata and serve
> it only from one central location, over https/sftp.

Even that won't help. The authors got so caught up in the complexities of the exploit that they didn't notice the BIG implication of their work. The problem can't be fixed with tech, crypto, or anything else short of HTTPS connections to known, trusted mirror operators.

Follow along as I demonstrate. Spamgang wants zombies, so they install a massive mirror farm for all of the major distros. They run it perfectly: fully updated with upstream as fast as their phat pipe can get it, perfectly signed metadata, packages and everything offered by http or https. Then they wait.

Sooner or later another remote root bug, in openssh for example, will hit and they are ready. Thousands of machines either automatically connect or their owners see the story here on /. and hit the update button. They download that signed, correct metadata and sure enough their machines realize they need that new openssh package and ask the mirror for it. And are 0wned a few milliseconds later.

Because in the act of requesting the package, all those machines just told the spamgang that a specific IP is (a) running openssh, (b) running the vulnerable version, and (c) currently connected to the network, very likely with the vulnerable software live. So in the time it takes the updated package to transfer, unpack and install, they have ample time to get in and install a rootkit. The beauty is that the victim then patches the hole, preventing anyone else from claiming the zombie.

Wait a random time before beginning to use the new zombies, to keep people from getting wise to what is happening, and the spamgang could likely get away with it for years."



(Via Slashdot.)
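
jmorris42's key insight deserves underlining: the leak is in the request itself, not in the package, so no amount of signing fixes it. Here's my own minimal Python sketch of the mirror's side of the attack (all names and version numbers invented for illustration):

    # Hypothetical sketch of the information leak described above; every
    # name here is invented. The mirror serves perfectly valid, correctly
    # signed packages -- the leak is the request metadata, which signing
    # the packages cannot hide.

    # The package that fixes a (hypothetical) remote-root hole in openssh.
    PATCHED_PACKAGE = "openssh-9.9p9.x86_64.rpm"

    targets = []  # clients that just proved they are vulnerable and online

    def handle_request(client_ip: str, path: str) -> None:
        """Serve a package request while quietly building a target list."""
        if path.endswith(PATCHED_PACKAGE):
            # Asking for the fix tells the mirror that this client (a) runs
            # openssh, (b) still has the vulnerable version, and (c) is
            # reachable right now, in the window before the update installs.
            targets.append(client_ip)
        # ...then serve the genuine, signed package like any honest mirror.

    handle_request("203.0.113.7", "/updates/" + PATCHED_PACKAGE)
    print(targets)  # ['203.0.113.7']

Which is exactly why the only real fix is restricting updates to mirrors you actually trust.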

Wednesday, 9 July 2008

The new 3G iPhone - why?

Is everybody certifiably insane? People queuing for a week outside the Apple Store in NY; O2 in the UK finding their servers glowing red-hot as millions of Brits attempt to get the jump on the 8 a.m. worldwide opening tomorrow. Why? It's a phone FFS! It's not the Second Coming. It's not the last gig ever by, say, The Who. It's a phone! And as it generally can't be bought without a long-term contract (and a more expensive one than before!), what's the point?

The queue:

iphoneline01.jpg

You can of course send me one. But I'm not wasting my money until somebody figures out a way to (a) jail-break it and (b) more importantly, not require my first born in payment (hear that Ben?).

And this is what it looks like in case you're going shopping, despite what I've said :)

apple-iphone-3g-black-1.jpg

"I've got the brains, you've got the looks"...

Matt brilliantly suggested that a great Web 2.0 mashup would be an unholy combination of Pandora and Musicovery (which I posted about earlier this week): taking the former's database and marrying it to the looks of the latter would result in a stunning service. Not sure if either API (or database) is accessible to the developers out there, so these two may just have to get together for real. So, remember you read it here first!

Tuesday, 8 July 2008

The Faces...

...just one of the bands available from the site in my previous post. I remember seeing them soooo many times.

I can't recall who said it, but it was something along the lines of "The Faces always operated on the principle that they were as wiped out as their audience". And it was true.

And this from Wolfgang's Vault description of them:

faces-dvd-2.jpg

"Everything you ever wanted to believe about rock ’n’ roll is true, and here’s proof: five of the scrawniest, scraggliest moppets in all the kingdom scraped themselves up off the pub floor, climbed on stage and created magic.
In the wake of calamitous label mismanagement that left one of the most important bands of the mod/psychedelic movement creatively hog-tied and practically bankrupt, Small Faces singer/guitarist Steve Marriott jumped ship to form Humble Pie with Peter Frampton (things must have been desperate!), setting the best rhythm section in Britain adrift on a sea of ale without a captain. So prodigious were Marriott’s talents that it took two musicians to fill his shoes. Luckily, a couple of guys that would soon be among the most recognizable and popularly adored rock stars in the world found their social calendars surprisingly wide open at the dawn of the ‘70s.
And so it was, with Ron Wood contributing thick and fuzzy slide guitar and Rod Stewart out front wheezing like the Stax horn section after a carton of unfiltered Pall Malls, that the Small was dropped from their moniker and the tight R&B focus was dropped from their sound in favor of a hard and sloppy brand of country-soul. The result was crude perfection - like the Stones with a glassy-eyed grin instead of a sneer. Sadly, nothing beautiful lasts forever, and the Faces, such as they were, couldn’t compete with the rocketing stardom of their jet-setting singer’s solo career.
This concert, recorded shortly before their demise, is a good example of what was right and what was wrong with the Faces. After bassist/singer Ronnie Lane's unceremonious exit from the touring version of the band, live sets played out rather like a Rod Stewart solo gig. Those familiar with the near-flawless rock 'n' folk of Stewart's first few solo records will recognize some of his mega-hits and equally brilliant, lesser-known songs (as well as a questionable taste in cover material which would later mar his reputation); but sorely missed is Lane's country heartache and distinctly English humor. Woody and the boys hold it together just barely, providing the ramshackle performance and rollicking good time that the punters paid to see. Particularly great are a slow and loose version of Sam Cooke's "Bring It on Home to Me" and the classic "Stay with Me", featuring a breakdown that threatens to literally break down.
Their reputations as musicians and generally wild and crazy guys guaranteed ample opportunities following their break-up, and the members of the Faces would go on to support the best in the industry, most notably The Rolling Stones (Wood, McLagan) and The Who (Jones). But even when playing with giants, these lads were hard-pressed to match the well-crafted chaos they first made together."

Songs from the vaults - a treasure trove!

For those of us of a certain age (or older), this site is a gold-mine. Here's what they say: "Wolfgang’s Vault is the home for the past, present and future of live music. This is the exclusive destination for The Bill Graham Archives, the King Biscuit Flower Hour and the Record Plant along with a dozen other archives that live here, and are relived here.

Within the Concert Vault are thousands of carefully restored and archived concert recordings to stream for free, hundreds of which are available for download. Browse the Concert Vault at your own pace by performer, by date or by venue; make your own playlist or let us guide you through the depths of the archive with Vault Radio.

Along the walls, halls and inner vaults of Wolfgang’s Vault is the greatest collection of concert posters, rock photography, vintage t-shirts and retro t-shirts, rock gear and concert related memorabilia.

Whether you are interested in the performers of the 60s, 70s, 80s or later, or the greatest emerging performers of today, Wolfgang’s Vault is where you’ll find them, and the stories behind the scenes. Find today’s and tomorrow’s concert tours listed on Mojam and this week’s rock reviews and criticism in the reborn Crawdaddy!: The Magazine of Rock.

Thanks for touring Wolfgang’s Vault, we hope you enjoy the experience. Next time, bring along a friend."

And for a full list of streamable and/or downloadable artists, cast your eyes over this (or click here).

Monday, 7 July 2008

Nice "universe of music" idea

Musicovery.com - try it and see what you think...

Choose mood, genre and date as in this screen-shot:

Picture 1.png

Wednesday, 2 July 2008

UPDATED: A dictionary for US readers...

...if there are any readers out there, Septic or otherwise, you may find this helpful when attempting to decode some of the posts on this blog (written in the Queen's English, I hasten to add; sorry Matt!): http://www.peevish.co.uk/slang/. And if you're interested in Cockney Rhyming Slang, then this will help you baffle your fellow Americans (be careful: if you think English English is hard, this is a whole new language!): http://www.cockneyrhymingslang.co.uk/. And finally, from a friend on CIX, Mike Todd, here's his lexicon of terms.

Supplies of Rare Earth Elements Exhausted By 2017

"tomhudson writes 'While we bemoan the current oil crisis, I ran across an editorial that led me to research a more immediate threat. Ramped-up production of flat-panel displays means the material to make them will be 'extinct' by 2017. This goes for other electronics as well. Quoting: 'The element gallium is in very short supply and the world may well run out of it in just a few years. Indium is threatened too, says Armin Reller, a materials chemist at Germany's University of Augsburg. He estimates that our planet's stock of indium will last no more than another decade. All the hafnium will be gone by 2017 also, and another twenty years will see the extinction of zinc. Even copper is an endangered item, since worldwide demand for it is likely to exceed available supplies by the end of the present century.' More links at the journal entry.'

Tuesday, 1 July 2008

A few ads that never saw the light of day...

...and looking at them, you can see why the client may have blanched and said "not a chance!" to the latte-swilling, oh-so-well-dressed ad executive and design team tasked with "selling" his latest great concept! With thanks to Jeffrey Zeldman, who has a lot more starting here and also has some very nice free icons available for download. Here we go:

dare.you.gif

heroin.jpg

happy.jpg