Radical media, politics and culture.

hydrarchist's blog

www.humancasting.org

Radio is inherently a broadcast medium. The internet is inherently store-and-forward. Proxy Caching Mechanism for Multimedia Playback Streams in the Internet http://www.ircache.net/Cache/Workshop99/Papers/rejaie-html/

Proposed Humancasting network architecture http://humancasting.manilasites.com/pictures/viewer$8

(Cron or any job scheduler) + (Napster or any filesharing tool) + (OPML or any playlist format) + (WinAmp or any mp3 player) + (signed playlists to allow for editorial voice) =
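A minimal sketch of how that chain of pieces might fit together, under loudly stated assumptions: the playlist URL, the HMAC "signature" scheme, the `mplayer` call and the `humancast.py` filename are all illustrative stand-ins, not part of any real Humancasting spec.

```python
# Sketch of the "cron + filesharing + OPML + player + signed playlist" chain.
# The HMAC-based "signature", the player command and all names are assumptions
# made for illustration only.
import hmac, hashlib, subprocess, urllib.request
import xml.etree.ElementTree as ET

SHARED_SECRET = b"editorial-key"          # stands in for a real signing scheme

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def playlist_is_authentic(opml_bytes: bytes, signature_hex: str) -> bool:
    # A signed playlist is what preserves the editorial voice.
    expected = hmac.new(SHARED_SECRET, opml_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def tracks_from_opml(opml_bytes: bytes) -> list:
    # OPML outline items carrying a 'url' attribute are treated as tracks.
    root = ET.fromstring(opml_bytes)
    return [o.get("url") for o in root.iter("outline") if o.get("url")]

def run_once(playlist_url: str, signature_hex: str) -> None:
    opml = fetch(playlist_url)
    if not playlist_is_authentic(opml, signature_hex):
        return                            # refuse unsigned or tampered playlists
    for track_url in tracks_from_opml(opml):
        # Any filesharing tool or plain HTTP fetch could stand in here; the
        # result is handed to whatever local player is installed.
        subprocess.run(["mplayer", track_url], check=False)

# cron supplies the "push", e.g. an hourly entry such as:
#   0 * * * * python3 humancast.py http://example.org/playlist.opml <sig>
```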

music and text/Flash instead of a DJ voice?

Push is the key

what is SYN/ACK?

http://tipster.weblogs.com/discuss http://www.sourceforge.net/projects/swarmcast/

The Case Against Micropayments http://www.oreillynet.com/pub/a/p2p/2000/12/19/micropayments.html

"Paris Metro Pricing model. http://www.research.att.com/~amo/doc/paris.metro.minimal.txt add text-to-speach processor (with a skin or filter?)

http://openapplications.org/challenge/index.htm http://www.thetwowayweb.com/soapMeetsRss

grid computing virtualisation http://groups.yahoo.com/group/decentralization/message/1060

http://www.superopendirectory.com/about

Gartner on P2P http://groups.yahoo.com/group/decentralization/message/1065

http://www.thetwowayweb.com/payloadsForRss Pro Mojo http://www.oreillynet.com/pub/a/p2p/2001/01/11/mojo.html

I'd like to see a system similar to this, but using some system of identifying other peers available that are "close by" (in network terms). The most obvious method of determining "peerness" would be DNS, but we are all aware of the problems of DNS & P2P systems. I'm thinking perhaps of some kind of client based trace-route program, and an algorithm which compares its traces with other peers in an attempt to find reasonably close matches.
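A rough sketch of that traceroute-comparison idea: two peers whose routes to a common landmark share a long prefix of routers are probably close in network terms. The hop lists are assumed to have been gathered separately (e.g. by parsing traceroute output); only the scoring is shown, and all addresses are invented.

```python
# Sketch of "compare traceroutes to estimate peerness": the peer whose route
# diverges from ours latest is treated as the closest candidate.

def shared_prefix_len(path_a, path_b) -> int:
    n = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        n += 1
    return n

def closest_peer(my_path, peer_paths) -> str:
    return max(peer_paths, key=lambda p: shared_prefix_len(my_path, peer_paths[p]))

mine = ["10.0.0.1", "62.40.96.1", "62.40.98.7", "195.66.224.10"]
peers = {
    "peerA": ["10.0.0.1", "62.40.96.1", "62.40.98.7", "198.32.118.4"],
    "peerB": ["192.168.1.1", "213.248.67.1", "80.91.249.9"],
}
print(closest_peer(mine, peers))   # -> "peerA"
```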

>>> As I understand Freenet, it's slightly better than that. Freenet uses something like consistent hashing so if a node doesn't have a document you're looking for, it at least knows which of its neighbors is more likely to have the document. So at each hop a request gets a little closer to the document. >>>
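A toy illustration of that "each hop gets a little closer" behaviour: keys are hashes, and a node forwards a request to whichever neighbour's key is numerically closest to the document's key. This is a sketch of the principle only, not Freenet's actual routing algorithm; the distance metric and TTL are assumptions.

```python
# Greedy key-based routing sketch: requests drift towards the node whose key
# is closest to the document key.
import hashlib

def key(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class Node:
    def __init__(self, node_id: str):
        self.id = key(node_id)
        self.store = {}          # doc_key -> document
        self.neighbours = []

    def lookup(self, doc_key: int, ttl: int = 10):
        if doc_key in self.store:
            return self.store[doc_key]
        if ttl == 0 or not self.neighbours:
            return None
        # Greedy step: hand the request to the neighbour closest to the key.
        best = min(self.neighbours, key=lambda n: abs(n.id - doc_key))
        return best.lookup(doc_key, ttl - 1)
```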

There is still the problem of the Freenet topology itself. A node's neighbors are (AFAIK) purely random and do not reflect the underlying topology. So there might be another Freenet node very close to you on the Internet, but it might be far away on Freenet. Thus Freenet does not necessarily deliver data from the closest node.

On Network-Aware Clustering of Web Clients, Balachander Krishnamurthy and Jia Wang, Proceedings of ACM SIGCOMM 2000, Stockholm, Sweden.

Possible proximity metrics: hop count, latency measure, bandwidth probe, physical distance, etc., or weighted combinations of the above. A simple ping (either IP or application level) contains information regarding hop count, router congestion/load, bandwidth, and end-node load.

We (www.vtrails.com) developed a p2p application for streaming (one to many to many), the one being a coordinating server that includes an algorithmic module in charge of mapping the IP requests and sorting them (network-wise and connection-wise).

mojo method http://groups.yahoo.com/group/decentralization/message/1111
swarmcast description http://groups.yahoo.com/group/decentralization/message/1121
serialcast or multicast?? http://www.techrepublic.com/printerfriendly.jhtml?id=r00720010103ggp01.htm
http://www.peertal.com/directory/
http://www.frankston.com/public/essays/ContentvsConnectivity.asp

Clay speaks!! You are not mistaken. We all live in an iterated prisoner's dilemma, so there are rewards for co-operation that come from the growth in the system as a whole.

This is where I think Mojo Nation is blowing smoke. They've made a big deal about the Xerox Gnutella study, and use it to question the intelligence of the user:

"In a similar vein, Napster and other distributed client-servers are built on the shifting sands of volunteerism. Freeloaders and parasites cannot be controlled. The freeloader gains all the benefit of the whole system and pushes the cost to those foolish enough to give away their resources."

As someone who has been foolish enough to give away my resources almost since Napster launched, I can say (along with tens of millions of others) that far from being foolish, this is one of the best software choices I've ever made.

I half-recommend (or recommend with trepidation) Non-Zero by Robert Wright. The first third of the book notes that life is a daisy chain of non-zero-sum games, and that there are non-zero-sum economic games as well. What Napster understood was that resource allocation could be non-zero-sum, i.e. that the existing allocation was not Pareto-optimal, if it leveraged unused resources correctly.

McCoy: The "Paris Metro Pricing" model is a market-based distributed resource allocation tool providing what Dr. Odlyzko argued was the least complex mechanism for best-effort quality of service when dealing with network congestion. It is not necessary to expose this sort of system to the user; it can exist simply as an optimization mechanism within the infrastructure. But a tool that provides both distributed load balancing (by shifting users towards under-utilized resources) and basic QoS (only if it becomes necessary, otherwise everything can run flat-out as fast as possible) can sometimes be a useful thing to have available.
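A toy illustration of the self-selection dynamic behind Paris Metro Pricing: identical service classes differ only in price, each user weighs congestion against price, and the pricier classes end up less loaded. The utility function, prices and sensitivities are invented for the example.

```python
# Toy Paris-Metro-Pricing simulation: users self-select among identically
# provisioned channels that differ only in price.
import random

PRICES = {"cheap": 1, "standard": 2, "premium": 4}
load = {name: 0 for name in PRICES}

def choose_channel(price_sensitivity: float) -> str:
    # Each user trades current congestion (load) off against price.
    def cost(name):
        return load[name] + price_sensitivity * PRICES[name]
    return min(PRICES, key=cost)

for _ in range(300):
    picked = choose_channel(random.uniform(0.1, 5.0))
    load[picked] += 1

print(load)   # the cheap channel ends up most congested, premium the lightest
```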

Both BearShare and LimeWire have new releases highlighting fairly extensive freeloader protection built into the clients. EDonkey 2000 has also implemented a rather clever mechanism for accomplishing something similar to our swarm downloading architecture by letting users who host the same file answer queries for different byte ranges within that file. This mechanism is not as aggressive about marshalling lots of agents to a specific download task, but as a passive replication and parallel downloader it seems to be a good idea.

more McCoy signpost http://groups.yahoo.com/group/decentralization/message/1207
On NAT http://msdn.microsoft.com/library/default.asp?URL=/library/techart/Nats2-msdn.htm
lucas on swarmcast
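A minimal sketch of that byte-range trick: peers holding the same file answer for different slices of it, so a downloader can pull the slices in parallel and reassemble. Peer transport is faked with in-memory copies; the splitting logic is the point.

```python
# Sketch of swarm downloading via byte ranges across peers that host the
# same file. Each "peer" here is just another full copy of the bytes.

def split_ranges(size: int, n_parts: int):
    # Half-open (start, end) byte ranges covering the whole file.
    step = -(-size // n_parts)           # ceiling division
    return [(i, min(i + step, size)) for i in range(0, size, step)]

def swarm_download(peers, size: int) -> bytes:
    parts = []
    for peer_copy, (start, end) in zip(peers, split_ranges(size, len(peers))):
        # In a real client each request goes to a different peer over the
        # network; here we simply slice each peer's copy.
        parts.append(peer_copy[start:end])
    return b"".join(parts)

original = bytes(range(256)) * 100
copies = [original] * 4                   # four peers hosting the same file
assert swarm_download(copies, len(original)) == original
```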

This is to confirm the Social Software gathering on November 22 and 23 in New York City, at NYU.

The current list of attendees and potential attendees is

Brad Fitzgerald (LiveJournal) Cameron Marlow (Blogdex) Chris Meyer (CGEY/CBI) Clay Shirky (NYU) Cory Doctorow (boingboing) Danny O'Brien (NTK) Geoff Cohen (CGEY/CBI) JC Herz (Joystick Nation) Jeff Bates (Slashdot) Jerry Michalski (Sociate) Jessica Hammer (Kleene-Star) Jon Udell (Byte) Marko Ahtisaari (Nokia/Aula) Matt Jones (BBC) Rael Dornfest (O'Reilly) Ray Ozzie (Groove) Rudy Ruggles (CBI) Rusty Foster (K5) Scott Heiferman (Meetup) Steven Johnson (Plastic) Tim O'Reilly (ORA) Ward Cunningham (Wiki)

with a dozen or so additional invites pending.

Also, an invite to an optional mailing list called social_software will arrive under separate cover. Between now and November, this is just for attendees of the meeting, but if the discussion gets lively (as I expect it will), I imagine we'll make it world-subscribable at the end of November.

We'll be meeting over two days, a Friday and a Saturday. The Friday sessions will be about introducing our work to one another, and uncovering important themes, wishlists, and possible fruitful areas for new research or code.

The sessions will be up-tempo, informal, and conversational. We will begin each session with a few of the participants offering descriptions of some piece of software or some open problem, and will then move to group conversation. Then we'll break (where, as we all know, the counter-conference will establish itself according to the Rules of Hallway Conversations), and do it again, with a new set of problems or themes.

Among the possible topics are:
- Identity, namespaces, and personality of individuals and groups
- Searching, threading, and filtering of people and content
- Are there interfaces that could help a user decipher the overall mood of the group?
- Why are visual elements (e.g. digital photography, cams, shared whiteboards) so poorly integrated into current social software?
- Why is it so hard to get a group of people to decide anything online?
- What is currently hard to do in an online group that should be easy?
- Easy that should be hard?
- Can we get the advantages of email and BBSes (longer, better edited, more thoughtful posts) into real-time environments like IM?

and, of course, whatever else comes up while we're talking.

There will be a group dinner Friday night, at a suitably veg-friendly establishment.

The Saturday sessions will be all brainstorming -- making lists of potentially fruitful areas of research and features, as well as lists of problems to solve in future generations of social software, ending with attempts to describe possible new types of social software that could be built with today's technology.

-clay

-=- Social Software: The gathering and its goals

We are living in a golden age of social software. Only twice before have we had a period of such intense innovation in software used by interacting groups: once in the early 70s, with the invention of email itself, and again at the end of that decade with Usenet, the CB Simulator (the precursor to IRC), and MUDs. This is a third such era, with the spread of 'writeable web' software such as weblogs and wikis, and peer-to-peer tools such as Jabber and Groove greatly extending the ability of groups to self-organize.

Every time social software improves, it is followed by changes in the way groups work and socialize. One consistently surprising aspect of social software is that it is impossible to predict in advance all of the social dynamics it will create. Recognizing this, the Social Software Summit seeks to bring together a small group of practitioners and theorists (~25) to share experiences in writing social software or thinking about its effects.

The gathering will take place over two days in the late fall in NYC. Its goals are, in order of importance:

1. Introducing the participants to one another.

The great irony of social software is that many of its practitioners operate in a vacuum. We expect that simply by bringing a diverse and talented group together, we can generate a wealth of significant new ideas.

2. Spur new efforts.

The current generation of social software is still rough-hewn. Neither the designers nor the users have settled on the ideal interfaces, system behaviors, or feature lists for the newest pieces of software, such as wikis and weblogs, and even older software, such as email and instant messaging applications, is still being adapted to new purposes. The Social Software gathering will include attempts to articulate possible new features, interfaces and tools.

3. Improving the literature.

Too much of the literature concerning social software focusses on the 'whole worlds' model, where all-encompassing environments such as MUDs or multi-player games are treated as emblematic of social software generally. In fact, the most important social software has tended to be much looser -- mailing lists, Usenet, even the humble CC line. Likewise, many of the pieces of social software being created today do not aim to create whole worlds for their users, but to perform certain functions well. Though the content of the meeting itself will be off the record, participants are encouraged to write about their own experiences and observations (as if we could stop you), and we will be producing a conference blog after the fact to point to the work and thoughts of the participants.

To exaggerate the point for emphasis, one could say that intellectual property and its conceptual neighbours may bear the same relationship to the information society as the wage-labor nexus did to the industrial manufacturing society of the 1900s.

"If you keep the code free you can do things with it, that affect the physical layer of the net, tha affect the content layer of the net, that affect the meaning of the net.... Everything's going to flow in the wires, just don't let thm notch up th resistance at the ends.

We're going to have to take at least a piece of it out of the pipes, we're going to have to put it in the air, we are going to have to generate a network that nobody owns. And in order to do that we are going to have to rely to a substantial measure on carriers that nobody owns either. We are going to have to build a layer of communication which consists of all of us routing one another's traffic, completely without any place where anybody bottlenecks anything. In that network censorship is failure and is routed around, but so is property - that gets routed around, so is control failure - that gets routed around. The wires in the ground are really important too, but we will not get that piece of the story straight unless we liberate enough bandwidth that has no carriers, to make equality."

Someone's Looking At You

On a night like this I deserve to get kissed at least once or twice You come over to my place screaming blue murder, needing someplace to hide. Well, I wish you'd keep quiet, Imaginations run riot, In these paper-thin walls. And when the place comes ablaze with a thousand dropped names I don't know who to call. But I got a friend over there in the government block And he knows the situation and he's taking stock, I think I'll call him up now Put him on the spot, tonight.

They saw me there in the square when I was shooting my mouth off About saving some fish. Now could that be construed as some radical's views or some liberals' wish. And it's so hot outside, And the air is so sweet, And when the pressure drop is heavy I don't wanna hear you speak. You know most killing is committed at 90 degrees. When it's too hot to breathe And it's too hot to think.

There's always someone looking at you. S-s-s-s-someone. They're looking at you.

And I wish you'd stop whispering. Don't flatter yourself, nobody's listening. Still it makes me nervous, those things you say. You may as well Shout it from the roof Scream it from your lungs Spit it from your mouth There's a spy in the sky There's a noise on the wire There's a tap on the line And for every paranoid's desire...

There's always Someone looking at you. S-s-s-s-someone looking at you... They're always looking at you.

(written by Bob Geldof) (taken from the album "The fine art of surfacing")

Slash under DDoS attack.

and Graham on the offensive: > Secondly Jamie King has recently written an interesting essay that ties > parts of the debate around copyright enclosure and the italian > immaterial labour/general intellect debate together: Towards an Army of > Ideas - Oppositional Intellect and the Bad Frontier > http://slash.autonomedia.org/article.pl?sid=02/09/16/1644231&mode=nested >

I've been arguing with Stefan Mz over similar ideas to the ones in this essay, so I might as well further my reputation of disagreeing with everything and being the odd one out ;-)

I liked the use of Winstanley. But as well as his general ideas quoted in the essay he also had ideas on what would now be called IP - starting from the idea that 'Kingly power hath crushed the spirit of knowledge and would not suffer it to rise up in its beauty and fulness' to a concrete program for education, science and an alternative to the patent system (in The Law of Freedom).

So what are the equivalent concrete ideas in this essay? As I read it, it says we need to drop defence of the 'information commons' (the goal of 'left-liberal-lawyer lobbyists, NGO gonks, and wild-eyed info egologists' - alas poor Lessig - 'necessarily failing and doomed'), and replace it by a more active strategy of 'constituting a shared community of ideas that, expecting such co-option and acting in prescience, deliberately designs itself to appear, perhaps, palatable, but to be in fact poisonous [to capital]'. And this poison pill is to be 'a return to Artaudian insanity via Burroughs' "language virus"'.

Well, the first problem for me with this is just the language. Stefan mentioned that the language of the German translation of Empire is 'ugly'. I doubt if it's any uglier than the original; and I have real problems reading or taking anything seriously that comes out of the whole Deleuze/Guattari tradition just because of this. Winstanley had a much better way of writing - clear, immediately understandable to everyone (ok, he uses religious phraseology - but that was intelligible to everyone when he wrote). I simply don't understand what a return to Artaudian insanity is (reminds me of the old Beatles song 'all you need to do is change your head', but I hope it isn't..)

More importantly, the whole idea here seems to be wrong. The 'poison pill' already exists: it's free software, and everything associated with it. People defending the 'information commons' are part of the defence of that too; Lessig and others are allies, not 'wild-eyed gonks' or whatever. The article is asking us to desert our allies, when we need to be helping them. Free software is still something that can potentially be destroyed; every ally we can get to stop that is a plus.

So how do two people defending the same basic set of ideas arrive at such opposite conclusions (he asks, rhetorically...). This is the bit that repeats my argument with Stefan Mz:

Jamie quotes in apparent agreement 'the intellectual activity of mass culture, [is] no longer reducible to simple labor, to the pure expenditure of time and energy', which I would also agree with. But then how can it be that: 'The expected huge increase in the value of intellectual labour is occurring'? I think the two statements contradict one another. Intellectual labour has no more value than it ever had; an economy based on it is not one based on value. It can only be forced into the mould of value by the most extreme contortions, arbitrary laws, etc.

When Marx wrote about the 'general intellect' it was in exactly this context - trying to guess how the contradictions of value would eventually drive capitalism to a point where it could no longer reproduce itself successfully. As it happens his preferred solution was a very roundabout one via the fall in profit and terminal crisis, rather than the direct one that is actually happening - but in either case the point is that value is not eternal, nor is capitalism a self-perpetuating system (an Althusserian orrery) but a finite one. On the one hand mass piracy is a sign of that end from within, on the other free software is the sign of an alternative. Not a magic alternative appearing from nothing, but one produced from the system itself. So it can't be co-opted.

That's quite enough for now - hope that wasn't too rude a welcome to the list-en mailing list! autonomedia.org looks an interesting site; I didn't know it... :-)

Graham

Remember that most of these property theorists, these exclusionists, on the other side are distributors - they don't make anything. They say they make something, but the work-for-hire principle is all that allows them to maintain that claim. They distribute stuff they say is theirs - you remember Mr. Sherman's difficulty on this point - not because they thought it up, but because they paid money to people who thought it up. They are simply buying and selling the right to distribute, and they have no reason to exist. They have no business to exist. So we have to do something to take their existence away from them; fortunately it's not too complicated, we just have to ignore them. Then they will make trouble. Then they will attempt to coerce people. Then they will put 15-year-olds in jail. Then they will arrest Russians in America. Then they will do various egregious things. Then we will fight. Then they will beat us. Then we will win. Because the more they disintermediate, the more there is only them and us. Oh, do you remember capitalism? The bourgeois, they made everything so much simpler; the class structure got narrower, it came down only to those exclusive groups, the bourgeoisie and the proletariat. Everything got simpler, those old social structures collapsed, "everything solid melts into air", you will recall. So in the end it's just them and us; they're people, we're people, eyeball to eyeball, they've got a laptop, we've got a laptop. Doesn't look good for them, does it? As long as I've got a laptop they're in trouble; they've got to take it away from me, they've got to put something inside of it that says it's not mine any more. And you just give me a neural interface and they've got to get even closer inside. And the world's not going to let them do that. Because for that people will fight. Larry says free labour. Free soil, you'll recall, went along with that, so I say free bandwidth. Free men went along with that; I say free minds.

Don't give up until we're free. Don't give up because you have a compromise that's okay now, so we can ride a little bit closer to the front of the bus than we used to. Freedom. Freedom.

Now.

Lit

New link: Greplaw, from the Berkman Center; looks like a Slashcode-based site.

Intro

Description of (1) Content Distribution Networks. Peer networks can be used to deliver the services known as Content Distribution Networks (CDNs), essentially comprising the storage, retrieval and dissemination of information. Companies such as Akamai and Digital Harbour have already achieved significant success through installing their own proprietary models of this function on a global network level, yet the same functions can be delivered by networks of users even where they have only a dial-up connection. Napster constituted the first instantiation of this potential, and subsequent generations of file-sharing technology have delivered important advances in terms of increasing the robustness and efficiency of such networks. In order to understand the role that peers can play in this context we must first examine the factors which determine data flow rates in the network in general.

(2) Content Storage Systems add

(3) Wireless community networks add

Summary of essential similarity between them.

1(a) Breakdown of congestion points on networks. The slow roll-out of broadband connections to home users has concentrated much attention on the problem of the so-called 'last mile' in terms of connectivity. Yet the connection between the user and their ISP is but one of four crucial variables deciding the rate at which we access the data sought. Problems of capacity exist at multiple other points in the network, and as the penetration of high-speed lines into the 'consumer' population increases these other bottlenecks will become more apparent. If the desired information is stored at a central server, the first shackle on speed is the nature of the connection between that server and the internet backbone. Inadequate bandwidth, or attempts to access by an unexpected number of clients making simultaneous requests, will handicap transfer rates. This factor is known as the 'first mile' problem and is highlighted by instances such as the difficulty in accessing documentation released during the Clinton impeachment hearings and, more frequently, by the 'Slashdot effect'. In order to reach its destination the data must flow across several networks, which are connected on the basis of what are known as 'peering' arrangements between the networks and facilitated by routers which serve as the interface. Link capacity tends to be underprovided relative to traffic, leading to router queuing delays. As the number of ISPs continues to grow this problem is anticipated to remain, as whether links are established is essentially an economic question. The third point of congestion is located at the level of the internet backbone, through which almost all traffic currently passes at some point. The backbone's capacity is a function of its cables and, more problematically, its routers: there is a mismatch between the growth of traffic and the pace of technological advance in router hardware and software packet forwarding. As more data-intensive transfers proliferate, this discrepancy between demand and capacity is further exacerbated, leading to delays. Only after negotiating these three congestion points do we arrive at the delay imposed at the last mile.

Assessing Quality of Service. What are the benchmarks used to evaluate Quality of Service? ("Typically, QoS is characterized by packet loss, packet delay, time to first packet (time elapsed between a subscribe request send and the start of stream), and jitter. Jitter is effectively eliminated by a huge client side buffer [SJ95]." Deshpande, Hrishikesh; Bawa, Mayank; Garcia-Molina, Hector, Streaming Live Media over a Peer-to-Peer Network.) For those who can deliver satisfaction of such benchmarks the rewards can be substantial, as Akamai demonstrates: some 13,000 edge servers at network-provider data-center locations; click-through around 20%; [10-15% abandonment rates] [15%+ order completion].
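A rough sketch of how the four benchmarks in that quote could be computed from per-packet send and receive timestamps (None meaning a packet never arrived). The figures and the jitter definition (mean absolute variation between consecutive delays) are illustrative assumptions.

```python
# Compute packet loss, mean delay, time-to-first-packet and jitter from
# invented per-packet timestamps.

def qos_report(sent, received):
    pairs = [(s, r) for s, r in zip(sent, received) if r is not None]
    delays = [r - s for s, r in pairs]
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:])) /
              max(len(delays) - 1, 1))
    return {
        "packet_loss": 1 - len(pairs) / len(sent),
        "mean_delay": sum(delays) / len(delays),
        "time_to_first_packet": pairs[0][1] - sent[0],
        "jitter": jitter,
    }

sent = [0.00, 0.02, 0.04, 0.06, 0.08]
received = [0.11, 0.14, None, 0.17, 0.20]   # third packet was lost
print(qos_report(sent, received))
```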

Although unable to ensure the presence of any specific peer in the network at a given time, virtualised CDNs function by possessing a necessary level of redundancy, so that the absence or departure of a given peer does not undermine the functioning of the network as a whole. In brief, individual hosts are unreliable and thus must be made subject to easy substitution. From a technical vantage point the challenge then becomes how to polish the transfer to replacement nodes (sometimes referred to as the problem of the 'transient web').

To facilitate this switching between peers, the distribution-level applications must be able to identify alternative sources for the same content, which requires a consistent identification mechanism so as to generate a 'content-addressable web', an effort currently absorbing the energies of commercial and standard-setting initiatives [Bitzi, Magnet, OpenContentNetwork].
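A minimal sketch of consistent content identification: hashing the bytes themselves yields a name that is identical on every peer holding the same file, which is what lets a client switch sources freely. The urn:sha1/base32 form mirrors the style of Bitzi/magnet identifiers, but treat the exact format here as illustrative.

```python
# Derive a content identifier from the file's bytes, so identical copies on
# different peers get identical names.
import base64, hashlib

def content_id(data: bytes) -> str:
    digest = hashlib.sha1(data).digest()
    return "urn:sha1:" + base64.b32encode(digest).decode()

copy_on_peer_a = b"the same song, byte for byte"
copy_on_peer_b = b"the same song, byte for byte"
assert content_id(copy_on_peer_a) == content_id(copy_on_peer_b)
print(content_id(copy_on_peer_a))
```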

Techniques used for managing network congestion: (b) load balancing / routing algorithms. "Load balancing is a technique used to scale an Internet or other service by spreading the load of multiple requests over a large number of servers. Often load balancing is done transparently, using a so-called layer 4 router." [wikipedia] Varieties include LB appliances, LB software, LB intelligent switches and traffic distributors. Cisco (DistributedDirector), GTE Internetworking (which acquired BBN and with it Genuity's Hopscotch), and Resonate (Central Dispatch) have been selling such solutions as installable software or hardware. Digex and GTE Internetworking (Web Advantage) offer hosting that uses intelligent load balancing and routing within a single ISP. These work like Akamai's and Sandpiper's services, but with a narrower focus. - wired
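A small sketch of the request-spreading decision such a balancer makes, using a simple least-connections policy; it shows only the selection logic, not the packet rewriting a real layer-4 device performs, and the server addresses are placeholders.

```python
# Least-connections selection: each new request goes to the backend with the
# fewest connections currently open.

class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self) -> str:
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def done(self, server: str) -> None:
        # Call when a connection to that backend closes.
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
for _ in range(6):
    print(lb.pick())          # requests fan out evenly across the backends
```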

- NAT (Network Address Translation). Destination NAT can be used to redirect connections pointed at some server to randomly chosen servers to do load balancing. Transparent proxying: NAT can be used to redirect HTTP connections targeted at the Internet to a special HTTP proxy which will be able to cache content and filter requests. This technique is used by some ISPs to reduce bandwidth usage without requiring their clients to configure their browser for proxy support, using a layer-4 router. [wikipedia]

- caching

- Akamai. Akamai FreeFlow is a hardware/software mix: algorithms plus machines, with a mapping server (fast, to check hops to a region) and content servers. http://www.wired.com/wired/archive/7.08/akamai_pr.html Sandpiper applications. Data providers concerned to provide optimal delivery to end users are increasingly opting to use specialist services such as Akamai to overcome these problems. Akamai delivers faster content through a combination of proprietary load-balancing and distribution algorithms and a network of machines installed across hundreds of networks where popularly requested data will be cached (11,689 servers across 821 networks in 62 countries). This spread of servers allows the obviation of much congestion, as the data is provided from the server cache either on the network itself (bypassing the peering and backbone router problems and mitigating that of the first mile) or from the most efficient available network given load-balancing requirements.

(c) Historical Evolution of filesharing networks

Popular filesharing utilities arose to satisfy a more worldly demand than the need to ameliorate infrastructural shortfalls.

- Napster. When Shawn Fanning released his Napster client the intention was to allow end-users to share MP3 files by providing a centralised index of all songs available on the network at a given moment and the ability for users to connect to one another directly to receive the desired file. Thus Napster controlled the gate to the inventory but was not burdened with execution of the actual file transfer, which occurred over HTTP (insert note on the speculative valuation of the system provided by financial analysts, with qualification). Essentially, popular file-sharing utilities enable content pooling. As is well known, the centralised directory look-up made Napster the subject of legal action, injunction and ultimately decline.

Nonetheless, Napster's legal woes generated the necessary publicity to encourage user adoption, and for new competitors to enter the market and innovate further. In the following section I describe some of the later generations of file-sharing software and chart the innovations which have brought them into a space of competition with Akamai et al.

- Gnutella. The original implementation has been credited to Justin Frankel and Tom Pepper from a programming division of AOL (the then-recently purchased Nullsoft Inc.) in 2000. On March 14th, the program was made available for download on Nullsoft's servers. The source code was to be released later, supposedly under the GPL license. The event was announced on Slashdot, and thousands downloaded the program that day. The next day, AOL stopped the availability of the program over legal concerns and restrained the Nullsoft division from doing any further work on the project. This did not stop Gnutella; after a few days the protocol had been reverse-engineered and compatible open source clones started showing up. (from Wikipedia) The Gnutella network (BearShare/LimeWire) represents the first decentralised client-server application of this kind. This allows a much more robust network in the sense that connectivity is not dependent on the legal health of a single operator. The trade-off is inefficiency in locating files and the problem of free-riding users, who actually impede the functionality of the system beyond simply failing to contribute material. LimeWire addresses this problem to some degree by providing the option to refuse downloads to users who do not share a threshold number of files. Unfortunately this cannot attenuate the problem of inefficient searches per se, merely offering a disciplinary instrument to force users to contribute. In order to sharpen search capacities in the context of a problematic network design, these networks have taken recourse to nominating certain nodes as super-peers, by virtue of the large number of files they are serving themselves. While essentially efficacious, the consequence is to undermine the legal robustness of the network. The threat is made clear in a paper published last year by researchers at Xerox PARC that analysed traffic patterns over the Gnutella network and found that one per cent of nodes were supplying over ninety per cent of the files. These users are vulnerable to criminal prosecution under the No Electronic Theft Act and the Digital Millennium Copyright Act. The music industry has been reluctant to invoke this form of action thus far, principally because of its confidence that the scaling problems of the Gnutella community reduce the potential commercial harm it can inflict. As super-peering etc. becomes more effective this may change.

- FastTrack/Kazaa. Similar systems are now being offered by these companies to commercial media distributors, such as Cloudcast (FastTrack) and Swarmcast, using technical devices to allow distributed downloads that automate transfer from other nodes when one user logs off. The intention here is clearly the development of software-based alternatives to the hardware offered by Akamai, the principal player in delivering accelerated downloads, used by CNN, Apple and ABC amongst others.

- Edonkey/Overnet, Freenet. Edonkey and Freenet distinguish themselves from the other utilities by their use of hashing to identify and authenticate files. As data blocks are entered into a shared directory a hash block is generated (on which more below). Freenet introduced the idea of power-law searches into the p2p landscape, partially inspired by the speculation that the Gnutella network would not scale due to a combination of its broadcast search model, the large number of users on low-speed data connections, and the failure of many users to share. Edonkey became the first to popularise p2p weblinks and to employ the Multicast File Transfer Protocol so as to maximise download speed by exploiting multiple sources simultaneously and allowing each user to become a source of data blocks as they were downloaded.

(ii) Search methods: broadcast, power-law, centralised look-up. Milgram, Hess, Edonkey bots.

(iii) Key terms - Supernodes. In the Gnutella networks searches are carried out on what is called a broadcast model. Practically this means that a request is passed by a node to all the nodes to which it is connected, which in turn forward it to other nodes, and so on. The responses of each node consume bandwidth and thus must be minimised, particularly where many nodes are (a) operating on a low-bandwidth connection and of limited utility for provisioning and (b) not sharing significant amounts of data. To overcome this problem, Gnutella clients now limit their requests to 'superpeers' that have enough network resources to function efficiently and act as ephemeral archives for smaller nodes in their vicinity.
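A toy illustration of that broadcast model and its TTL horizon: a query floods to neighbours with a time-to-live, and anything beyond the horizon is simply unreachable. This is a sketch of the principle, not the actual Gnutella protocol; node names and topology are invented.

```python
# TTL-limited flooding search: everything past the horizon is invisible.

class GnutellaNode:
    def __init__(self, name, files=()):
        self.name = name
        self.files = set(files)
        self.neighbours = []

    def search(self, filename, ttl=3, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen:
            return []                  # don't revisit a node
        seen.add(self.name)
        hits = [self.name] if filename in self.files else []
        if ttl > 0:
            for n in self.neighbours:
                hits += n.search(filename, ttl - 1, seen)
        return hits

a, b, c, d = (GnutellaNode(x) for x in "abcd")
a.neighbours, b.neighbours, c.neighbours = [b], [c], [d]
d.files.add("song.mp3")
print(a.search("song.mp3", ttl=3))   # ['d']: the file is three hops away
print(a.search("song.mp3", ttl=2))   # []: d lies outside the search horizon
```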

- Cloudcast/Swarmcast

- Multicast/Swarmed Downloads (BearShare, LimeWire, Shareaza). File transfer between two peers fails to maximise bandwidth efficiency due to the congestion problems outlined at the beginning of the chapter. Thus, where the file is available from multiple sources, different components of it will be downloaded simultaneously so as to minimise the total time to completion. Under the MFTP protocol which forms the basis for Edonkey/Overnet this also allows other clients to initiate downloading from a partial download on the disk of another peer. [Check whether this is the case for the others too.]

- Hashing. In June 2002 the media reported that music companies were now employing a company called 'Overnet' to introduce fake files into the file-sharing webs, something which many users had suspected for some time. Fortunately a solution lay close at hand and in the case of one network had already been implemented: unique cryptographic hashes based upon the size of the file, which ultimately constituted a reliable identifier. Edonkey users had already established portals for what became known as 'P2P web links', where independent parties would verify the authenticity of the files and then make their description and hash available through a site dedicated to highlighting new releases combined with a searchable database. These sites (sharereactor, filenexus, filedonkey) did not actually store any of the content files themselves, merely serving as a clearing house for metadata. The need for content verification arose first on Edonkey due to the proclivity of its users to share very large files - often in excess of 600 MB - whose transfer could require several days, and hence implied a significant waste of machine resources and human effort should the data turn out to be corrupted in any way.

- Metadata. Given the enormous and constantly expanding volume of information, it is plain that in order to access and manage it efficiently something broadly equivalent to the Dewey system for library organisation is required. Where metadata protocols are collectively accepted they can significantly increase the efficiency of searches through the removal of ambiguities about the data's nature. The absence of standardised metadata has meant that search engines are incapable of reflecting the depth of the web's contents and cover it only in a partial manner. Fruitful searches require a semantically rich metadata structure producing descriptive conventions but pointing to unique resource identifiers (e.g. URLs). Apart from the failure to agree collective standards, however, the constant threat of litigation also discourages the use of accurate metadata, so that content is often secreted, made available only to those privy to a certain naming protocol, reaching its acme in programs such as 'pig latin'.
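Combining the two ideas just described (swarmed block transfer and hash identification), a minimal sketch of block-level hash checking: each downloaded block is verified against a published hash as it arrives, so a corrupted or decoy block is caught immediately rather than after days of downloading. The block size and the use of MD5 are illustrative stand-ins, not the actual Edonkey parameters, and the "published" hash list is assumed to come from a trusted link site.

```python
# Verify each downloaded block against hashes published alongside the file's
# identifier, rejecting planted fakes early.
import hashlib

BLOCK = 9_500_000            # roughly eDonkey-sized blocks; figure is illustrative

def block_hashes(data: bytes):
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def accept_block(index: int, block: bytes, published) -> bool:
    return hashlib.md5(block).hexdigest() == published[index]

original = b"x" * 20_000_000
published = block_hashes(original)            # what the link portal would list
good = original[:BLOCK]
decoy = b"y" * BLOCK                          # a planted fake block
print(accept_block(0, good, published))       # True
print(accept_block(0, decoy, published))      # False
```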

(d) Comparison of Akamai with software based alternative.

(e) Deviations for pure p2p model

(f) Problems - Appropriation by proprietary technologies. Napster was a copyrighted work, so that once it became subject to legal action no further conduits to the music pool were available. Gnutella is an open network shared by multiple applications, some of which have opted for GPL development (such as LimeWire, out of enlightened self-interest, and Gnucleus, out of far-sightedness and commitment to the free software model) whereas others have remained proprietary. By and large, however, Gnutella developers appear to have tended towards co-operation, as evidenced by the Gnutella developers list. Their coherence is likely galvanised by the fact that they are effectively in competition with the FastTrack network (Kazaa, Grokster), which operates on a strictly proprietary basis. The hazards entailed in reliance on a proprietary technology, even in the context of a decentralised network, were manifested in March 2002 when changes to the protocol were made and the owners refused to provide an updated version to the most popular client, Morpheus, whose users were consequently excluded from the network. One reason suggested at the time was that the elimination of Morpheus was brought on by the fact that it was the most popular client, largely because it did not integrate spyware monitoring users' activity; its elimination effectively provided the opportunity for its two rivals to divide up its users between them.

Ironically, Morpheus was able to relaunch within three days by taking recourse to the Gnutella network and by appropriating the code behind the Gnucleus client with only minor, largely cosmetic, alterations. Nonetheless, the incident highlights the weaknesses introduced into networks where one player has the capacity to sabotage another and lock its users (along with their shared content) out of the network.

- Free riding. Freeriding and Gnutella: the return of the tragedy of the commons. Bandwidth, the crisis of P2P, the tragedy of the commons, Napster's coming difficulty with a business plan and Mojo karma. Doing things the Freenet way. Eytan Adar & Bernardo Huberman (2000).

Hypothesis 1: A significant portion of Gnutella peers are free riders.
Hypothesis 2: Free riders are distributed evenly across different domains (and by speed of their network connections).
Hypothesis 3: Peers that provide files for download are not necessarily those from which files are downloaded.

"In a general social dilemma, a group of people attempts to utilize a common good in the absence of central authority. In the case of a system like Gnutella, one common good is the provision of a very large library of files, music and other documents to the user community. Another might be the shared bandwidth in the system. The dilemma for each individual is then to either contribute to the common good, or to shirk and free ride on the work of others. Since files on Gnutella are treated like a public good and the users are not charged in proportion to their use, it appears rational for people to download music files without contributing by making their own files accessible to other users. Because every individual can reason this way and free ride on the efforts of others, the whole system's performance can degrade considerably, which makes everyone worse off - the tragedy of the digital commons."

Figure 1 illustrates the number of files shared by each of the 33,335 peers we counted in our measurement. The sites are rank ordered (i.e. sorted by the number of files they offer) from left to right. These results indicate that 22,084, or approximately 66%, of the peers share no files, and that 24,347, or 73%, share ten or fewer files.

Top hosts            Files shared    As percent of the whole
333 hosts (1%)       1,142,645       37%
1,667 hosts (5%)     2,182,087       70%
3,334 hosts (10%)    2,692,082       87%
5,000 hosts (15%)    2,928,905       94%
6,667 hosts (20%)    3,037,232       98%
8,333 hosts (25%)    3,082,572       99%
(Table 1)

And providing files actually downloaded? Again, we measured a considerable amount of free riding on the Gnutella network. Out of the sample set, 7,349 peers, or approximately 63%, never provided a query response. These were hosts that in theory had files to share but never responded to queries (most likely because they didn't provide "desirable" files). Figure 2 illustrates the data by depicting the rank ordering of these sites versus the number of query responses each host provided. We again see a rapid decline in the responses as a function of the rank, indicating that very few sites do the bulk of the work. Of the 11,585 sharing hosts the top 1 percent of sites provides nearly 47% of all answers, and the top 25 percent provide 98%.

Quality? We found the degree to which queries are concentrated through a separate set of experiments in which we recorded a set of 202,509 Gnutella queries. The top 1 percent of those queries accounted for 37% of the total queries on the Gnutella network. The top 25 percent account for over 75% of the total queries. In reality these values are even higher due to the equivalence of queries ("britney spears" vs. "spears britney").

Tragedy? First, peers that provide files are set to only handle some limited number of connections for file download. This limit can essentially be considered a bandwidth limitation of the hosts. Now imagine that there are only a few hosts that provide responses to most file requests (as was illustrated in the results section). As the connections to these peers are limited they will rapidly become saturated and remain so, thus preventing the bulk of the population from retrieving content from them. A second way in which quality of service degrades is through the impact of additional hosts on the search horizon. The search horizon is the farthest set of hosts reachable by a search request. For example, with a time-to-live of five, search messages will reach at most peers that are five hops away. Any host that is six hops away is unreachable and therefore outside the horizon. As the number of peers in Gnutella increases, more and more hosts are pushed outside the search horizon and files held by those hosts become beyond reach.

- Trust/security. Security and privacy threats constitute other elements deterring participation, both for reasons relating to users' normative beliefs opposed to surveillance and for fear of system penetration by untrustworthy daemons. The security question has recently been scrutinised in light of the revelation that the popular application Kazaa had been packaging a utility for distributed processing known as Brilliant Digital in its installer. Although unused thus far, it emerged that there was the potential for it to be activated in the future without the knowledge of the end-user.

- Viruses. .vbs and .exe files can be excluded from searches; MP3s etc. are data, not executables. Virus spreads via Kazaa (but the article wrongly identifies it as a worm): http://www.bitdefender.com/press/ref2706.php Audio Galaxy: contains really ugly webHancer spyware that may make your Internet connection unusable.

- Content Integrity. Commercial operations such as Akamai can guarantee the integrity of the content that they deliver through their control and ownership of their distributed network of caching servers. Peer-to-peer networks, on the other hand, cannot guarantee the security of the machines they incorporate and must take recourse to means of integrity verification inherent in the data being transported, as is the case with hash sums derived from the size and other characteristics of the file (so-called 'self-verifiable URIs'). [http://open-content.net/specs/draft-jchapweske-caw-03.html] CAW lets you assemble an ad-hoc network of "proxies" that you need not trust to behave properly, because you can neutralize any attempts to misbehave. [Gordon Mohr, ocn-dev@open-content.net, Tue, 18 Jun 2002 11:11:28 -0700] Make it so he can search out the media by the hash and you reduce the trust requirements necessary -- all you need to trust is the hash source, which can come easily over a slower link.

Fundamentally this factor reintroduces the problem of trust into network communications in a practical way. Whilst the threat of virus proliferation may be low, other nuisances or threats are much more realistic. In June it was confirmed that a company named Overnet had been employed by record labels to introduce fake and/or corrupted files into shared networks in the hope of frustrating users and driving them back inside the licit market. This had been suspected by many users and observers for some time, and in the aftermath of its confirmation came the news that at least two other entities - the French company 'Retpan' and 'p2poverflow' - were engaged in the same activity. Where relatively small files are concerned - and the 3.5 to 5.0 megabyte size typical of a music track at 128-kbps encoding is small by today's standards - such antics, whilst inconvenient, are unlikely to prove an efficient deterrent. Given that most files have been made available by multiple users, there will always be plenty of authentic copies in circulation.

The situation is quite different, however, for the sharing of cinematographic works and television programs, whose exchange has grown rapidly in the last years, principally due to the penetration of broadband and the emergence of the DivX compression format, which has made it simple to burn downloads onto single CD-Rs, thus obviating limited hard disk space as an impediment to the assembling of a collection. A typical studio release takes up in excess of 600 megabytes when compressed into DivX and can take anything from a day to a week to download in its entirety, depending on the transfer mechanism used, speed of connection, number of nodes serving the file, etc. Obviously, having waited a week one would be rather irritated to discover that instead of Operation Takedown the 600-megabyte file in fact contained a lengthy denunciation of movie piracy courtesy of the MPAA. In order to counter exactly that possibility, portals have emerged on the Edonkey network (the principal file-sharing network for files of this size) whose function is to authenticate the content of hash-identified files that are brought to their attention. They initiate a download, ensure the integrity of the content, and verify that the file is available on an adequate number of nodes so as to be feasibly downloaded. Provided that the aforesaid criteria are satisfied, they then publish a description of the 'release' together with the necessary hash identifier on their site. This phenomenon is accelerating rapidly, but the classical examples remain www.sharereactor.com, www.filenexus.com and www.filedonkey. Similar functionality can be derived from the efforts underway as part of the Bitzi metadata project mentioned above, and these initiatives could stymie the efforts by the music companies to render the network circuits useless by increasing the dead-noise ratio.

- Prosecution/ISP Account Termination and other Woes. At the prompting of the music industry, the No Electronic Theft Act was introduced in 1997, making the copying of more than ten copies of a work, or works having a value in excess of a thousand dollars, a federal crime even in the absence of a motivation of 'financial gain'. In August of 1999 a 22-year-old student from Oregon, Jeffrey Gerard Levy, became the first person indicted under the act. Subsequently there have been no prosecutions under that title. In July and August 2002, however, the Recording Industry Association of America publicly enlisted the support of other copyright owners and allied elected representatives in calling on John Ashcroft to commence prosecutions. As mentioned above in relation to free riding on the Gnutella network, the small number of nodes serving a high percentage of files means that such users could be attractive targets for individual prosecution.

In addition, at least two companies have boasted that they are currently engaged in identifying and tracing the IP numbers of file sharers (Retpan (again) and a company called 'Ranger') so as to individualise the culprits. Such a draconian option is not a wager without risks for the plaintiff music companies; indeed arguably this is why they have forborne from such a course up until now. Currently, however, this IP data is being used to pressure a more realistic and less sympathetic target, namely the user's Internet Service Provider. ISPs, having financial resources, are more sensitive to the threat of litigation and positioned to take immediate unilateral action against users they feel place them in jeopardy. This has already led to the closure of many accounts; indeed this is not a novel phenomenon, having commenced in the aftermath of the Napster closure with moves against those running 'OpenNap'.

Hacking. More recently, and with great public brouhaha, the RIAA and their allies have begun pushing for legislation to allow copyright owners to hack the machines of those they have a reasonable belief are sharing files. Copyright owners argue that this will 'even the playing field' in their battle against music 'pirates', and legislation to this effect was introduced by Representative Howard Berman (California) at the end of July 2002. As of this writing the function of this initiative is unclear, as a real attempt to pursue this course to its logical conclusion would involve the protagonists in a level of conflict with users which would certainly backfire. The likelihood is that this is another salvo in the content industry's drive to force the universal adoption of DRM technology on hardware manufacturers.

(g) Economic Aspects - Cost structure of broadband. Whilst it is obvious why users utilise these tools to extract material, it is not so plain why they should also use them to provide material in turn to others and so avoid a tragedy of the commons. Key to the willingness to provide bandwidth has been the availability of cable and DSL lines, which provide capacity in excess of most individuals' needs at a flat-rate cost. There is thus no correlation between the amount of bandwidth used and the price paid; in brief, there is no obvious financial cost to the provider. In areas where there are total transfer caps, or use is on a strictly metered basis, participation is lower for the same reason.

- Lost CPU cycles / Throttling bandwidth leakage. A Kazaa supernode will use a maximum of 10% of total CPU resources, and allows an opt-out. All file-sharing clients allow the user ultimate control over the amount of bandwidth to be dedicated to file transfer, but they diverge in terms of the consequences for the user's own capacity. Thus Edonkey limits download speed by a ratio related to one's maximum upload. LimeWire, on the other hand, has a default of 50% bandwidth usage, but the user can alter this without any significant effects (so long as the number of transfer slots is modulated accordingly). Gnucleus offers an alternative method in its scheduling option, facilitating connection to the network during defined periods of the day, so that bandwidth is dedicated to file-sharing outside of the hours when it is required for other tasks.
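A minimal sketch of those two throttling knobs, a bandwidth share and a schedule of allowed hours, in the spirit of the LimeWire default and the Gnucleus scheduler; the line capacity, percentage and hours are placeholder values.

```python
# Combine a bandwidth cap with a sharing schedule: outside the allowed hours
# the client gets no upload budget at all.
from datetime import datetime

LINE_CAPACITY_KBPS = 512          # total upstream capacity (placeholder)
SHARE_FOR_P2P = 0.5               # LimeWire-style default of 50%
ALLOWED_HOURS = range(1, 8)       # only share between 01:00 and 07:59

def upload_budget_kbps(now=None) -> float:
    now = now or datetime.now()
    if now.hour not in ALLOWED_HOURS:
        return 0.0                # outside the schedule: give the link back
    return LINE_CAPACITY_KBPS * SHARE_FOR_P2P

print(upload_budget_kbps(datetime(2002, 11, 22, 3, 0)))    # 256.0
print(upload_budget_kbps(datetime(2002, 11, 22, 12, 0)))   # 0.0
```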

- Access to goods. The motivation attracting participation in these networks remains that which inspired Napster's inventor: the opportunity to acquire practically unlimited content. Early in the growth of Napster's popularity, users realised that other types of files could be exchanged apart from music, as all that was required was a straightforward alteration of the naming protocol such that the file appeared to be an MP3 (Unwrapper). Later applications were explicitly intended to facilitate the sharing of other media, such that today huge numbers of films, television programs, books, animations, pornography of every description, games and software are available. The promise of such goodies is obviously an adequate incentive for users to search out, select and install a client-server application and to acquire the knowledge necessary for its operation. Intuitive graphical user interfaces enable a fairly rapid learning curve, in addition to which a myriad of user discussion forums, weblogs and newsgroups provide all that the curious or perplexed could demand.

- Collective Action Mechanisms. Solutions?
i. In the "old days" of modem-based bulletin board services (BBSs), users were required to upload files to the bulletin board before they were able to download.
ii. Freenet, for example, forces caching of downloaded files on various hosts. This allows for replication of data in the network, forcing those who are on the network to provide shared files.
iii. Another possible solution to this problem is the transformation of what is effectively a public good into a private one. This can be accomplished by setting up a market-based architecture that allows peers to buy and sell computer processing resources, very much in the spirit in which Spawn was created.

Conclusion. (h) Commercial operations at work in the area. Interesting comparison of acquisition times in TCC at p. 28. http://www.badblue.com/w020408.htm http://www.gnumarkets.com/ Commercial implementations: swarmcast, cloudcast, upriser; Mojo Nation's market in distributed CDN.

II Content Storage Systems. (a) Commodity business. Describe current market. Analogise process. Assess scale of resources available. Costs of memory versus cost of bandwidth.

III Wireless Community Networks. (a) Basic description. The physical layer: i) 802.11b ii) Bluetooth iii) Nokia mesh networking.

Breaking the 250-foot footprint: DIY, www.consume.net

b) Public provisioning

c) Security Issues

d) Economics
