"Attempts by the American literati to change US policy always failed in Congress in the 19th century because of the opposition of American publishers... American publishers were nothing if not enterprising. In one incident, which eerily anticipates the Internet and copyright, American agents cabled from London to the US the entire contents of a book published by the Queen within 24 hours of its release. The American public had access to hard copies within 12 hours of the end of the transmission." - Drahos, Information Feudalism, p. 33

“A. Smith was essentially correct with his productive and unproductive labour, correct from the standpoint of bourgeois economy. What the other economists advance against it is either horse-piss (for instance Storch, Senior even lousier etc.), namely that every action after all acts upon something, thus confusion of the product in its natural and its economic sense; so that the pickpocket becomes a productive worker too, since he indirectly produces books on criminal law (this reasoning at least as correct as calling a judge a productive worker because he protects from theft). Or the modern economists have turned themselves into such sycophants of the bourgeois that they want to demonstrate to the latter that it is productive labour when somebody picks the lice out of his hair, or strokes his tail, because for example the latter activity will make his fat head - blockhead - clearer the next day in the office. It is therefore quite correct - but also characteristic - that for the consistent economists the workers in e.g. luxury shops are productive, although the characters who consume such objects are expressly castigated as unproductive wastrels. The fact is that these workers, indeed, are productive, as far as they increase the capital of their master; unproductive as to the material result of their labour. In fact, of course, this ‘productive’ worker cares as much about the crappy shit he has to make as does the capitalist himself who employs him, and who also couldn’t give a damn for the junk. But, looked at more precisely, it turns out in fact that a true definition of a productive worker consists in this: A person who needs and demands exactly as much as, and no more than, is required to enable him to gain the greatest possible benefit for his capitalist. All this nonsense. Digression….”

Marx, Grundrisse

In criminology as in economics there is scarcely a more powerful word than 'capital'. In the former discipline it denotes death; in the latter it has designated the 'substance' or the 'stock' of life: apparently opposite meanings. Just why the same word, 'capital', has come to mean both crimes punishable by death and the accumulation of wealth founded on the produce of previous (or dead) labour might be left to etymologists were not the association so striking, so contradictory and so exact in expressing the theme of this book. For this book explores the relationship between the organised death of living labour (capital punishment) and the oppression of the living by dead labour (the punishment of capital). Peter Linebaugh, The London Hanged, p. xv

As is widely known, Mauss's objective with this fine little volume is to analyse the economic system of exchange of 'total services' that lay underneath the practice of gift giving in many cultures. The practice was intensely reciprocal, and involved not only the parties to the gifting, but also the gods and dead spirits as well. Sacrificial destruction was often carried out to please the gods, and he shows how in some cultures this became the granting of alms, whereby the gods were as content if the material wealth that would otherwise have been offered up for destruction instead went to the poor. Thus he argues that the practice of the gift was related to a concept of justice.

Reading his account, it is impossible to avert the mind from that other great thinker of moral economy, E. P. Thompson. In 'The Making of the English Working Class', Thompson looks at the complex system of custom, patronage and commons that bound the British poor to the nobility prior to the advent of the industrial revolution and the vicious ideology of laissez-faire, whose productivist reverie was born in the fire of the enclosures.

But Thompson's work is simultaneously an anatomy of economic exchange and power, and the role of paternalism as social shock absorber and fire extinguisher is never allowed to disappear from sight. Likewise, a century earlier, the first great English criminologist Henry Mayhew was to comment on the function of charity that it was a means to inhibit the poor from simply seizing what they wanted and needed - a passage that incidentally was excised from the Penguin abridgement that has been in circulation in recent times.

Thus an interesting question, it appears to me, is to what degree aspects of Mauss and Thompson can be combined. Mauss's blindness to power, and his economic formalism (odd though it seems to mention it), appears to preclude the application of his theory as an interpretative aid, whilst at the same time appearing strangely appropriate for a society subjected to compulsory amnesia.

Farewell beautiful Lugano my sweet land, driven away guiltlessly the anarchists are leaving, and they set off singing with hope in their heart.

It is for you exploited for you workers that we are handcuffed just like criminals. Yet our ideal is but an ideal of love.

Anonymous comrades friends who remain the social truths do spread like strong people. This is the revenge that we ask of you.

And you who drive us away with an infamous lie, you bourgeois republic will be ashamed one day. Today we accuse you in the face of the future.

Ceaselessly banished we will go from land to land promoting peace and declaring war, peace among the oppressed war to the oppressors.

Helvetia, your government makes itself someone else's slave, a brave people's traditions it offends and insults the legend of your William Tell.

Farewell dear comrades friends of Lugano farewell white snowy Ticinese mountains the knight-errants are dragged to the North.

Everyone tells me I’m a feminist. All I know is that I’m just as good as others…and that especially means men. I am definitely a socialist and I’m definitely a Republican.

Mairead Farrell, shot dead in Gibraltar, March 8th, 1988 (International Women's Day)

(iii) Key Terms - Supernodes In the Gnutella networks searches are carried out on what is called a broadcast model. Practically this means that a request is passed by the node to all the nodes to which it is connected, each of which in turn forwards it to other nodes, etc. The responses of each node consume bandwidth and thus must be minimised, particularly where many nodes are (a) operating on a low bandwidth connection and of limited utility for provisioning and (b) not sharing significant amounts of data. To overcome this problem, Gnutella clients now limit their requests to 'superpeers' that have enough network resources to function efficiently and act as ephemeral archives for smaller nodes in their vicinity.
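The broadcast model can be sketched in a few lines. The `Node` class, the breadth-first traversal and the default TTL below are illustrative assumptions for the sketch, not Gnutella's actual wire format:

```python
from collections import deque

class Node:
    """Minimal peer: a set of shared file names and a neighbour list."""
    def __init__(self, files=()):
        self.shared_files = set(files)
        self.neighbours = []

def flood_search(start, query, ttl=5):
    """Broadcast-model search: the query is passed to every neighbour,
    each of which forwards it on, until the time-to-live runs out."""
    seen = {start}
    hits = []
    frontier = deque([(start, ttl)])
    while frontier:
        node, t = frontier.popleft()
        if query in node.shared_files:
            hits.append(node)
        if t == 0:
            continue  # TTL exhausted: no further forwarding
        for peer in node.neighbours:
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, t - 1))
    return hits
```

The `seen` set models the duplicate-suppression real clients perform, and the TTL check is what bounds the bandwidth each query can consume.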

- Multicast/Swarmed Downloads (Bearshare, Limewire, Shareaza) File transfer between two peers fails to maximise bandwidth efficiency due to the congestion problems outlined at the beginning of the chapter. Thus where the file is available from multiple sources, different components will be downloaded simultaneously so as to minimise the total time of completion. Under the MFTP protocol which forms the basis for Edonkey/Overnet, this also allows other clients to initiate downloading from a partial download on the disk of another peer. [Check whether this is the case for the others too.]
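A minimal sketch of the swarming idea, assuming a simple round-robin assignment of chunks to source peers (real clients select chunks more cleverly, e.g. by rarity or peer speed):

```python
def plan_swarm(file_size, chunk_size, sources):
    """Split the file into fixed-size chunks and spread them round-robin
    across the available source peers, so several components of the same
    file can be downloaded simultaneously."""
    n_chunks = -(-file_size // chunk_size)  # ceiling division
    return {chunk: sources[chunk % len(sources)] for chunk in range(n_chunks)}
```

Each entry of the returned plan says which peer a given chunk should be fetched from; completion time is then governed by the slowest assigned peer rather than a single bottleneck connection.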

- Partial Download Sharing Where large files are being shared amongst a large number of users, capacity is rigidly limited to those who already have a copy of the entire file in their shared folder. As this can take over a week to accomplish, this injects a high quotient of unfulfilled demand. The Edonkey network allows users to transfer partial files from other peers, with the consequence that even where no peer may have the whole file at a given moment, all the constituent parts can be available and allow a successful transfer to take place. (Ares, Shareaza and Gnucleus apparently enable partial file sharing as well.)
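The principle reduces to a set-union check; the chunk numbering and data structures here are assumptions made for the sketch:

```python
def transfer_can_complete(n_chunks, chunk_sets):
    """Partial download sharing: a transfer can succeed as long as every
    chunk of the file exists somewhere in the swarm, even if no single
    peer holds the entire file."""
    available = set()
    for chunks in chunk_sets:  # one set of held chunk numbers per peer
        available |= chunks
    return available >= set(range(n_chunks))
```

This is why partial sharing matters for very large files: three peers each a third of the way through a download can collectively complete each other, where under whole-file sharing all three would stall.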

- Hashing In June 2002 the media reported that music companies were now employing a company called 'Overpeer' to introduce fake files into the file sharing webs, something which many users had suspected for some time. Fortunately a solution lay close at hand and in the case of one network had already been implemented: unique cryptographic hashes computed over the contents of the file, which constitute a reliable identifier. Edonkey users had already established portals for what became known as 'P2P web links', where independent parties would verify the authenticity of the files and then make their description and hash available through a site dedicated to highlighting new releases combined with a searchable database. These sites (Sharereactor, Filenexus, Filedonkey) did not actually store any of the content files themselves, merely serving as a clearing house for metadata. The need for content verification arose first on Edonkey due to the proclivity of its users to share very large files - often in excess of 600 MB - whose transfer could require several days, and hence implied a significant waste of machine resources and human effort should the data turn out to be corrupted in any way.
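The verification step is just a digest comparison. The sketch below uses SHA-1 for illustration; Edonkey itself uses a chunked MD4 scheme, and the function names are my own:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A content hash is a reliable identifier: any tampering with the
    payload, however small, produces a completely different digest."""
    return hashlib.sha1(data).hexdigest()

def matches_published_hash(data: bytes, published: str) -> bool:
    """Check a completed download against the hash a portal published."""
    return fingerprint(data) == published
```

A portal need only publish the short hex digest; any user can then confirm, after (or during) a multi-day download, that what arrived is the release that was described.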

- Metadata Given the enormous and constantly expanding volume of information, it is plain that in order to access and manage it efficiently something broadly equivalent to the Dewey system for library organisation is required. Where metadata protocols are collectively accepted they can significantly increase the efficiency of searches through the removal of ambiguities about the data's nature. The absence of standardised metadata has meant that search engines are incapable of reflecting the depth of the web's contents and cover it only in a partial manner. Fruitful searches require a semantically rich metadata structure providing descriptive conventions and pointing to unique resource identifiers (e.g. URLs). Apart from the failure to agree collective standards, however, the constant threat of litigation also discourages the use of accurate metadata, so that content can be secreted, made available only to those privy to a certain naming protocol - a practice reaching its acme in conventions such as 'pig latin'.
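What such a record might look like in practice - the field names below are hypothetical illustrations, not any agreed standard:

```python
# An illustrative metadata record of the kind a verification portal
# might publish: descriptive fields plus a unique resource identifier
# (a content-hash URN) tying the description to an exact payload.
record = {
    "title": "Some Independent Film",
    "format": "DivX",
    "size": 641_728_512,  # bytes
    "urn": "urn:sha1:EXAMPLEHASH",
}

def search(records, **criteria):
    """Standardised fields make searches unambiguous: exact matching on
    agreed keys rather than guessing at free-text file names."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]
```

The point of the URN field is that a search result resolves to one verifiable payload, removing the ambiguity that plagues name-based queries.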

If you right-click a search result, you'll notice a nice option called "Bitzi lookup". Bitzi is a centralized database for hashes from all major file sharing networks. If you look up a search result, you might find out that someone else has provided information about whether the file is real, what quality it is, etc. This is obviously very valuable and will save you from downloading hoaxes or low quality files. The only annoying part is that the Bitzi pages are cluttered with banners.

(d) Comparison of Akamai with software based alternative.

(e) Deviations from the pure p2p model Fasttrack clients have certain centralised features, not only supernodes. The reverse engineered open source giFT client was shut out through the distribution of an update that required clients to contact a Fasttrack computer for authentication, only following which could the list of supernodes be retrieved.

(f) Problems - Appropriation by proprietary technologies Napster was a copyrighted work, so that once it became subject to legal action no further conduits to the music-pool were available. Gnutella is an open network shared by multiple applications, some of which have opted for GPL development (such as Limewire (out of enlightened self-interest) and Gnucleus (out of far-sightedness and commitment to the free software model)) whereas others have remained proprietary. By and large, however, Gnutella developers appear to have tended towards co-operation, as evidenced by the Gnutella developers list. Their coherence is likely galvanised by the fact that they are effectively in competition with the Fasttrack network (Kazaa, Grokster), which operates on a strictly proprietary basis. The hazards entailed by reliance on a proprietary technology, even in the context of a decentralised network, were manifested in March 2002 when changes to the protocol were made and the owners refused to provide an updated version to the most popular client, Morpheus, whose users were consequently excluded from the network. One explanation suggested at the time was that the elimination of Morpheus was brought on by the fact that it was the most popular client, largely because it did not integrate spyware monitoring users' activity; its elimination effectively provided the opportunity for its two rivals to divide up its users between them.

Ironically, Morpheus was able to relaunch within three days by taking recourse to the Gnutella network, appropriating the code behind the Gnucleus client with only minor, largely cosmetic, alterations. Nonetheless, the incident highlights the weaknesses introduced into networks where one player has the capacity to sabotage another and lock its users (along with their shared content) out of the network. The Gnucleus codebase has now generated twelve clones.

- Free riding Freeriding and Gnutella: the return of the tragedy of the commons. Bandwidth, crisis of P2P, tragedy of the commons, Napster's coming difficulty with a business plan, and Mojo karma. Doing things the Freenet way. Eytan Adar & Bernardo Huberman (2000): Hypothesis 1: A significant portion of Gnutella peers are free riders. Hypothesis 2: Free riders are distributed evenly across different domains (and by speed of their network connections). Hypothesis 3: Peers that provide files for download are not necessarily those from which files are downloaded. "In a general social dilemma, a group of people attempts to utilize a common good in the absence of central authority. In the case of a system like Gnutella, one common good is the provision of a very large library of files, music and other documents to the user community. Another might be the shared bandwidth in the system. The dilemma for each individual is then to either contribute to the common good, or to shirk and free ride on the work of others. Since files on Gnutella are treated like a public good and the users are not charged in proportion to their use, it appears rational for people to download music files without contributing by making their own files accessible to other users. Because every individual can reason this way and free ride on the efforts of others, the whole system's performance can degrade considerably, which makes everyone worse off - the tragedy of the digital commons." Figure 1 illustrates the number of files shared by each of the 33,335 peers we counted in our measurement. The sites are rank ordered (i.e. sorted by the number of files they offer) from left to right. These results indicate that 22,084, or approximately 66%, of the peers share no files, and that 24,347 or 73% share ten or less files.
Files shared by the top hosts:

  Top hosts      Files shared   As % of the whole
  333   (1%)     1,142,645      37%
  1,667 (5%)     2,182,087      70%
  3,334 (10%)    2,692,082      87%
  5,000 (15%)    2,928,905      94%
  6,667 (20%)    3,037,232      98%
  8,333 (25%)    3,082,572      99%
  (Table 1)

And providing files actually downloaded? Again, we measured a considerable amount of free riding on the Gnutella network. Out of the sample set, 7,349 peers, or approximately 63%, never provided a query response. These were hosts that in theory had files to share but never responded to queries (most likely because they didn't provide "desirable" files). Figure 2 illustrates the data by depicting the rank ordering of these sites versus the number of query responses each host provided. We again see a rapid decline in the responses as a function of the rank, indicating that very few sites do the bulk of the work. Of the 11,585 sharing hosts the top 1 percent of sites provides nearly 47% of all answers, and the top 25 percent provide 98%. Quality? We found the degree to which queries are concentrated through a separate set of experiments in which we recorded a set of 202,509 Gnutella queries. The top 1 percent of those queries accounted for 37% of the total queries on the Gnutella network. The top 25 percent account for over 75% of the total queries. In reality these values are even higher due to the equivalence of queries ("britney spears" vs. "spears britney"). Tragedy? First, peers that provide files are set to only handle some limited number of connections for file download. This limit can essentially be considered a bandwidth limitation of the hosts. Now imagine that there are only a few hosts that provide responses to most file requests (as was illustrated in the results section). As the connections to these peers are limited they will rapidly become saturated and remain so, thus preventing the bulk of the population from retrieving content from them.
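The concentration measure underlying these figures can be reproduced with a simple ranking function - a sketch, assuming we are handed a list of per-peer file counts:

```python
def top_share(files_per_peer, fraction):
    """Share of all offered files accounted for by the top `fraction`
    of peers, ranked by the number of files they share."""
    ranked = sorted(files_per_peer, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)
```

Run over the Adar-Huberman sample this kind of tally yields the steep curve reported above, where 1% of hosts supply over a third of everything on offer.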
A second way in which quality of service degrades is through the impact of additional hosts on the search horizon. The search horizon is the farthest set of hosts reachable by a search request. For example, with a time-to-live of five, search messages will reach at most peers that are five hops away. Any host that is six hops away is unreachable and therefore outside the horizon. As the number of peers in Gnutella increases more and more hosts are pushed outside the search horizon and files held by those hosts become beyond reach.
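The horizon effect admits a back-of-envelope bound, assuming each host contributes a fixed number of fresh neighbours per hop (real topologies are messier, so this is an upper bound):

```python
def horizon_reach(new_neighbours_per_hop, ttl):
    """Upper bound on hosts reachable by a flooded query: the frontier
    multiplies at each hop until the TTL expires, and every host beyond
    that horizon is simply invisible to the search."""
    return sum(new_neighbours_per_hop ** hop for hop in range(1, ttl + 1))
```

With 4 fresh neighbours per hop and a TTL of 5 the bound is 1,364 hosts; however large the network grows beyond that, the extra hosts and their files are out of reach.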

eMule is one of the first file sharing clients to compress all packets in real time, thereby increasing potential transfer speed (I'm not sure whether this works only with other eMule users). It further extends the eDonkey protocol by introducing a very basic reputation system: eMule remembers the other nodes it deals with and rewards them with quicker queue advancement if they have sent you files in the past. So far, the eDonkey network has relied on its proprietary nature to enforce uploading: if you change the upload speed in the original client, the download speed is scaled down as well. eMule's reputation feature may make this kind of security by obscurity (which has already been undermined by hacks) unnecessary.
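The queue-advancement idea might be sketched as follows; the scoring rule (raw bytes received, highest first) is an assumption for illustration, not eMule's actual credit formula:

```python
class CreditQueue:
    """Remember what each peer has uploaded to us and serve our upload
    queue in order of that credit, so past contributors wait less."""
    def __init__(self):
        self.credit = {}  # peer -> bytes they have sent us

    def record_upload_from(self, peer, nbytes):
        self.credit[peer] = self.credit.get(peer, 0) + nbytes

    def queue_order(self, waiting_peers):
        # Highest credit served first; strangers (credit 0) wait longest.
        return sorted(waiting_peers, key=lambda p: -self.credit.get(p, 0))
```

Because the ranking is computed locally from observed behaviour, it needs no obscured protocol to enforce: a hacked client that never uploads simply accumulates no credit anywhere.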

More efforts to enforce sharing http://www.infoanarchy.org/story/2002/6/20/123110/395 and again Applejuice http://www.infoanarchy.org/story/2002/6/20/123110/395

Problems of Defaults: Firewalls and NAT The default firewall on Windows XP has resulted in the inaccessibility of large numbers of files formerly available. In addition, secondary connections made across NATs can make files unreachable from the exterior, a problem addressed through the introduction of the push command.
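How the push command works around one-sided reachability, sketched with boolean flags standing in for real connectivity tests:

```python
class Peer:
    def __init__(self, accepts_inbound):
        self.accepts_inbound = accepts_inbound

def connect_for_transfer(requester, provider):
    """A firewalled/NATed provider cannot accept inbound connections,
    so the requester sends a push message and the provider opens the
    connection outward to the requester instead."""
    if provider.accepts_inbound:
        return "direct"       # requester connects to provider as usual
    if requester.accepts_inbound:
        return "push"         # provider connects out to the requester
    return "unreachable"      # both ends firewalled: transfer fails
```

The last case is the residual problem: when both peers sit behind NATs or default firewalls, push cannot help and the file stays out of reach.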

Primary connections typically consume more CPU cycles than secondary connections.

- Trust/security Security and privacy threats constitute other elements deterring participation, both for reasons relating to users' normative beliefs opposed to surveillance and for fear of system penetration by untrustworthy daemons. The security question has recently been scrutinised in light of the revelation that the popular application Kazaa had been packaging a utility for distributed processing known as Brilliant Digital in its installer package. Although unused thus far, it emerged that there was the potential for it to be activated in the future without the knowledge of the end-user.

- Viruses, Trojans, Spyware and other nuisances .vbs and .exe files can be excluded from searches. MP3s etc. are data, not executables. Virus spreads via Kazaa (but the article wrongly identifies it as a worm): http://www.bitdefender.com/press/ref2706.php Audio Galaxy: contains really ugly webHancer spyware that may make your Internet connection unusable. Bundled software delivered with file sharing applications frequently includes spyware that monitors users' activity. Bundling has also been used to install CPU and bandwidth consuming programs such as Gator, and in other cases - such as that involving Limewire this year - trojans (http://www.wired.com/news/privacy/0,1848,49430,00.html).

- Content Integrity Commercial operations such as Akamai can guarantee the integrity of the content that they deliver through their control and ownership of their distributed network of caching servers. Peer to peer networks on the other hand cannot guarantee the security of the machines they incorporate and must take recourse to means of integrity verification inherent in the data being transported, as is the case with hash sums derived from the contents and other characteristics of the file (so-called 'self-verifiable URIs'). [http://open-content.net/specs/draft-jchapweske-caw-03.html] CAW lets you assemble an ad-hoc network of "proxies" that you need not trust to behave properly, because you can neutralize any attempts to misbehave. [Gordon Mohr ocn-dev@open-content.net Tue, 18 Jun 2002 11:11:28 -0700] Make it so he can search out the media by the hash and you reduce the trust requirements necessary -- all you need to trust is the hash source, which can come easily over a slower link. Fundamentally this factor reintroduces the problem of trust into network communications in a practical way. Whilst the threat of virus proliferation may be low, other nuisances or threats are much more realistic. In June it was confirmed that a company named Overpeer had been employed by record labels to introduce fake and/or corrupted files into shared networks in the hope of frustrating users and driving them back inside the licit market. This had been suspected by many users and observers for some time, and in the aftermath of this confirmation arose the news that at least two other entities - the French company 'Retspan' and 'p2poverflow' - were engaged in the same activity. Where relatively small files are concerned - and the 3.5 to 5.0 megabyte size typical of a music track at 128 kbps encoding constitutes small by today's standards - such antics, whilst inconvenient, are unlikely to prove an efficient deterrent.
Given that most files have been made available by multiple users there will always be plenty of authentic copies in circulation. The situation is quite different, however, in relation to the sharing of cinematographic works and television programs, whose exchange has grown rapidly in the last years, principally due to the penetration of broadband and the emergence of the DivX compression format, which has made it simple to burn downloads onto single CDRs, thus obviating limited hard disk space as an impediment to the assembling of a collection. A typical studio release takes up in excess of 600 megabytes when compressed into DivX and can take anything from a day to a week to download in its entirety depending on the transfer mechanism used, speed of connection, number of nodes serving the file, etc. Obviously, having waited a week one would be rather irritated to discover that instead of Operation Takedown the 600 megabyte file in fact contained a lengthy denunciation of movie piracy courtesy of the MPAA. In order to counter exactly that possibility, portals have emerged on the Edonkey network (the principal filesharing network for files of this size) whose function is to authenticate the content of hash-identified files that are brought to their attention. They initiate a download, ensure the integrity of the content, and verify that the file is available on an adequate number of nodes so as to be feasibly downloaded. Provided that the aforesaid criteria are satisfied, they publish a description of the 'release' together with the necessary hash identifier on their site. This phenomenon is accelerating rapidly but the classical examples remain www.sharereactor.com, www.filenexus.com and www.filedonkey. Similar functionality can be derived from the efforts underway as part of the Bitzi metadata project mentioned above, and these initiatives could stymie the efforts by the music companies to render the network circuits useless by increasing the dead-noise ratio.

- Prosecution/ISP Account Termination and other Woes At the prompting of the music industry the No Electronic Theft Act was introduced in 1997, making the copying of more than ten copies of a work or works having a value in excess of a thousand dollars a federal crime even in the absence of a motivation of 'financial gain'. In August of 1999 a 22 year old student from Oregon, Jeffrey Gerard Levy, became the first person indicted under the act. Subsequently there have been no prosecutions under that title. In July and August 2002, however, the Recording Industry Association of America publicly enlisted the support of other copyright owners and allied elected representatives in calling on John Ashcroft to commence prosecutions. As mentioned above in relation to free riding on the Gnutella network, the small number of nodes serving a high percentage of files means that such users could be attractive targets for individual prosecution.

In addition, at least two companies have boasted that they are currently engaged in identifying and tracing the IP numbers of file sharers (Retspan (again) and a company called 'Ranger') so as to individualise the culprits. Such a draconian option is not a wager without risks for the plaintiff music companies; indeed, arguably this is why they have forborne from such a course up until now. Currently, however, this IP data is being used to pressure a more realistic and less sympathetic target, namely the user's Internet Service Provider. ISPs, having financial resources, are more sensitive to the threat of litigation and positioned to take immediate unilateral action against users they feel place them in jeopardy. This has already led to the closure of many accounts; indeed this is not a novel phenomenon, having commenced in the aftermath of the Napster closure with moves against those running 'OpenNap' servers.

Hacking More recently, and with great public brouhaha, the RIAA and their allies have begun pushing for legislation to allow copyright owners to hack the machines of those they have a reasonable belief are sharing files. Copyright owners argue that this will 'even the playing field' in their battle against music 'pirates', and legislation to this effect was introduced by representative Howard Berman (California) at the end of July 2002. As of this writing the function of this initiative is unclear, as a real attempt to pursue this course to its logical conclusion would involve the protagonists in a level of conflict with users which would certainly backfire. The likelihood is that this is another salvo in the content industry's drive to force the universal adoption of a DRM technology on hardware manufacturers.

(g) Economic Aspects - Cost structure of broadband Whilst it is obvious why users utilise these tools to extract material, it is not so plain why they should also use them to provide material in turn to others and so avoid a tragedy of the commons. Key to the willingness to provide bandwidth has been the availability of cable and DSL lines which provide capacity in excess of most individuals' needs at a flat rate cost. There is thus no correlation between the amount of bandwidth used and the price paid; in brief, there is no obvious financial cost to the provider. In areas where there are total transfer caps or use is on a strictly metered basis, participation is lower for the same reason.

In this case, search bandwidth consumption serves as a tax on the members of the network, which ensures that those who bring the most of that resource to the network are those that bear the burden of running the network. From an ISP point of view, traffic crossing AS borders is more expensive than local traffic. We found that only 2-5% of Gnutella connections link nodes located within the same AS, although more than 40% of these nodes are located within the top ten ASs. This result indicates that most Gnutella-generated traffic crosses AS borders, thus increasing costs unnecessarily. Large amounts of extra-network traffic are expensive for ISPs, and consequently an increasing number have been introducing bandwidth caps. One September 2002 report claimed that up to 60% of all network traffic was being consumed by P2P usage. Wide implementation of IP multicast has been suggested as a potential remedy to these problems, such that once a piece of content was brought within an ISP's network, it would then be served from within the network to other clients, thus reducing unnecessary extra-network traffic. Interestingly, the same report argues that much of the 60% derives from search queries and advertising; the former could probably be much reduced by a shift to the power-law search method described above. (Source: The Effects of P2P on Service Provider Networks, Sandvine, September 2002. The methodology employed by Sandvine in assembling their statistics has been criticised as conflating traditional client-server downloads with peer transfers.) 1) Service providers should silently remove caps from any transfers among users of the same ISP. Eventually networks would arise that make use of this bandwidth advantage.
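The AS-border cost point can be expressed as a simple measurement over a list of connections; the node-to-AS mapping is assumed given (in practice it comes from BGP routing tables):

```python
def cross_as_fraction(connections, as_of):
    """Fraction of P2P connections that cross AS borders - the traffic
    that is expensive for ISPs. `as_of` maps each node to its AS number;
    `connections` is a list of (node, node) pairs."""
    crossing = sum(1 for a, b in connections if as_of[a] != as_of[b])
    return crossing / len(connections)
```

A topology-aware overlay would aim to push this fraction down by preferring same-AS neighbours, which is precisely what suggestion 1) above would reward.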

- Lost CPU cycles/Throttling bandwidth leakage A Kazaa supernode will use a maximum of 10% of total CPU resources, and allows an opt-out. All file sharing clients allow the user ultimate control over the amount of bandwidth to be dedicated to file transfer, but they diverge in terms of the consequences for the user's own capacity. Thus Edonkey limits download speed by a ratio related to one's maximum upload. Limewire on the other hand has a default of 50% bandwidth usage, but the user can alter this without any significant effects (so long as the number of transfer slots is modulated accordingly). Gnucleus offers an alternative method in its scheduling option, facilitating connection to the network during defined periods of the day, so that bandwidth is dedicated to file-sharing outside of the hours in which it is required for other tasks.

On some clients the built-in MP3 player can be as cycle-consuming as the application itself, as is the case with Limewire. Mldonkey has been known to use nearly 20% of the CPU resources available. - Access to goods The motivation attracting participation in these networks remains that which inspired Napster's inventor: the opportunity to acquire practically unlimited content. Early in the growth of Napster's popularity users realised that other types of files could be exchanged apart from music, as all that was required was a straightforward alteration of the naming protocol such that the file appeared to be an MP3 (Unwrapper). Later applications were explicitly intended to facilitate the sharing of other media, such that today huge numbers of films, television programs, books, animations, pornography of every description, games and software are available. The promise of such goodies is obviously an adequate incentive for users to search, select and install a client application and to acquire the knowledge necessary to its operation. Intuitive graphical user interfaces enable a fairly rapid learning curve, in addition to which a myriad of user discussion forums, weblogs and news groups provide all that the curious or perplexed could demand.

- Collective Action Mechanisms Solutions? i. In the "old days" of the modem-based bulletin board services (BBS), users were required to upload files to the bulletin board before they were able to download. ii. Freenet, for example, forces caching of downloaded files in various hosts. This allows for replication of data in the network, forcing those who are on the network to provide shared files. iii. Another possible solution to this problem is the transformation of what is effectively a public good into a private one. This can be accomplished by setting up a market-based architecture that allows peers to buy and sell computer processing resources, very much in the spirit in which Spawn was created.
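Solution i - the BBS-style ratio rule - might be sketched as follows; the 0.25 threshold is an illustrative choice, not a figure from any particular system:

```python
def may_download(uploaded_bytes, downloaded_bytes, required_ratio=0.25):
    """BBS-style enforcement: a peer may keep downloading only while its
    upload/download ratio stays above a required threshold."""
    if downloaded_bytes == 0:
        return True  # newcomers may start downloading
    return uploaded_bytes / downloaded_bytes >= required_ratio
```

The rule converts the shared library from a pure public good into something closer to a club good: continued access is conditioned on contribution, directly countering the free-riding dynamic described above.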

Strictly Legal Applications of Current File Sharing Applications. Although litigation constantly focuses attention on the alleged copyright-infringing uses of these programs, large amounts of material of a public domain or GPL character are also shared. In addition, I believe that we are now witnessing a wider implementation of these networks for the purpose of bypassing the gatekeeping functions of the existing communications industry. One of the most interesting examples in this regard is that provided by Transmission Films, a Canadian company partnered with Overnet that launched in August 2002 the most advanced iteration of the network that began with eDonkey. TF offers independent films for viewing either by streaming or download, with options also to purchase the films permanently. Digital Rights Management is otherwise used to limit access to a five-day period from user activation. Customers pay a set fee in advance and then spend the monies in their account selecting the options that they prefer.

In a similar vein, Altnet/Brilliant Digital (owners of Kazaa) have announced the integration of a micropayments facility into their client in order to facilitate acquisition of DRM-protected material on their network; to this end they have made agreements with several independent music labels. http://www.slyck.com/newssep2002/091802c.html

Q: How is Overnet different from Gnutella? Gnutella and Overnet are both distributed networks, but Overnet uses what is called a distributed hash table to organize the data that is searched for. This means that nodes know which other nodes to send a search to. In Gnutella the searches and publishes are sent more or less randomly, so the network is far less efficient. It is like the difference between looking something up in a large pile of papers and in a filing cabinet.
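The filing-cabinet analogy can be sketched in a few lines. This is not the real Overnet/Kademlia protocol, just a toy consistent-hashing ring showing why a distributed hash table lets a publish or a search go straight to one responsible node instead of being broadcast:

```python
import hashlib

# Toy DHT sketch: hash node names and keys onto the same ring, and make
# the first node clockwise of a key's hash responsible for storing it.
def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, node_ids):
        self.nodes = sorted(node_ids, key=h)

    def responsible_node(self, key):
        # First node whose hash is >= the key's hash, wrapping around.
        kh = h(key)
        for node in self.nodes:
            if h(node) >= kh:
                return node
        return self.nodes[0]

ring = HashRing([f"node{i}" for i in range(8)])
store = {}
owner = ring.responsible_node("madonna.mp3")
store[(owner, "madonna.mp3")] = "peer-address"       # publish: one message
print(ring.responsible_node("madonna.mp3") == owner)  # lookup hits the same node
```

Because publisher and searcher compute the same hash, both arrive at the same node deterministically; flooding, by contrast, has no way of knowing where the answer lives.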

Conclusion (h) Commercial operations at work in the area. Interesting comparison of acquisition times in TCC at p. 28. http://www.badblue.com/w020408.htm http://www.gnumarkets.com/ Commercial implementations: Swarmcast, Cloudcast, Uprizer, Mojo Nation's market in distributed CDN. Mojo Nation has now morphed into MNet, but without the Mojo, which is to say without the idea of how to provide users with a numerical guide as to how to organise scarce resources. Without this feature MNet seems little more than a file-sharing application without a significant userbase.

II Content Storage Systems (a) Commodity business. Describe current market. Analogise process. Assess scale of resources available. Costs of memory versus cost of bandwidth.

III Wireless Community Networks (a) Basic description. The physical layer: i) 802.11b ii) Bluetooth iii) Nokia Mesh Networking

Breaking the 250-foot footprint: DIY www.consume.net

b) Public provisioning

c) Security Issues

d) Economics Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System (2002), Matei Ripeanu, Ian Foster, Adriana Iamnitchi, http://people.cs.uchicago.edu/~matei/PAPERS/ic.pdf

http://www.analysphere.com/09Sep02/contents.htm http://lists.infoanarchy.org/mailman/listinfo.cgi/p2pj http://www.law.wayne.edu/litman/classes/cyber http://www.noosphere.cc/peerToPeer.html http://citeseer.nj.nec.com/ripeanu02mapping.html http://www.infoanarchy.org/story/2002/8/10/33623/3436

http://www.kuro5hin.org/story/2002/1/23/211455/047 Neat, I hadn't heard of GNUnet before.

The keys look like they might be SHA1 hashes -- 20 bytes, shown as 40 hex characters.

Are they hashes over the full content, the content's assigned name, or some sort of composited partial hash -- like the progressive hashing (used by Freenet) or tree hashes (used by Bitzi and OnionNetworks/CAW)?

They might want to consider optionally accepting Base32-specified keys as well... that's the default representation being used for human/text-protocol display by Gnutella, OnionNetworks/CAW, and Bitzi. http://ova.zkm.de/perl/ova-raplayer?id=1004561024&base=ova.zkm.de
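For readers unfamiliar with the key styles this exchange mentions, here is a rough sketch of a flat SHA1 key, a naive binary tree hash over fixed-size blocks, and the Base32 display form. The real THEX/Tiger-tree formats used by Bitzi and OnionNetworks differ in block size and hash function; this is only meant to show the shape of the two approaches.

```python
import hashlib, base64

def flat_sha1(data):
    # One digest over the whole content: 20 bytes, 40 hex characters.
    return hashlib.sha1(data).digest()

def tree_hash(data, block=1024):
    # Hash fixed-size blocks, then repeatedly hash pairs of digests
    # until a single root remains (a simplified Merkle tree).
    level = [hashlib.sha1(data[i:i + block]).digest()
             for i in range(0, len(data), block)] or [flat_sha1(b"")]
    while len(level) > 1:
        pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
        level = [hashlib.sha1(b"".join(p)).digest() for p in pairs]
    return level[0]

data = b"x" * 5000
print(len(flat_sha1(data)))                        # 20-byte digest
print(base64.b32encode(flat_sha1(data)).decode())  # Base32 form: 32 chars
```

The advantage of the tree form is that individual blocks can be verified as they arrive from different peers, rather than only after the whole download completes.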

No price signals! No managerial command! Little need of coordination. Incremental and granular. Diffusion of the former function of the editor. Given a sufficiently large number of contributors the incentives question becomes trivial, as the level of bait offered need only be scaled to that small increment. Peer production is limited not by costs but by: modularity - how many can participate, and with what variation of investment by the actors; can it be broken down? granularity - how small an increment can be contributed with minimal investment; and the cost of integration.

Decrease communication costs; increase human salience.

Information opportunity costs: how do you decide how to act? Reduce uncertainty about different forms of action. With a vast set of resources and many agents, it is a better way of identifying the right person for any given task at a given moment. Economies of scale and scope: boundless, rather than bounded as in the firm.

Other capital inputs are now cheaper - except humans! (I'm not a capital input.) Defection restraints: CPRs, endogenous technologies, social norms, iterative peer production of integration; redundancies evened out.

Together, the FastTrack and Gnutella protocols currently boast an outstanding 2.9 million simultaneous users (www.slyck.com, July 2, 2002).

When Morpheus first joined the Gnutella network, its population exploded to over 500,000 users. Now we're witnessing its population hover at only 160,000. We talked to a LimeWire representative and discovered several key reasons for its decline.

Intro. We estimate that if Napster were built on a client-server architecture, for the number of songs "on" its network at its peak, Napster would have had to purchase over 5,000 NetApp F840 Enterprise Filers to host all the songs shared among its users.

Using an estimate of 65 million users at its peak, allegedly holding 171 MP3s apiece for a total of 33,345 TB (costing $666,900,000), and a Webnoize estimate of three billion downloads a month, with each MP3 conservatively estimated at 3 MB, would require 27,778 Mbit of bandwidth per second (45 OC-12 lines, for a further total of $6,698,821 per month). Alternatively, to purchase the necessary PCs and ISP accounts would cost $1,495,000,000.
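The arithmetic behind those figures can be checked directly (assuming a 30-day month and decimal units, i.e. 1 TB = 1,000,000 MB):

```python
# Re-deriving the back-of-envelope Napster estimates quoted above.
users = 65_000_000
files_per_user = 171
mb_per_file = 3

total_mb = users * files_per_user * mb_per_file
print(total_mb / 1_000_000)        # → 33345.0 TB of shared storage

downloads_per_month = 3_000_000_000
mbits_per_month = downloads_per_month * mb_per_file * 8   # megabits moved
seconds_per_month = 30 * 24 * 3600
print(round(mbits_per_month / seconds_per_month))         # → 27778 Mbit/s sustained
```

Both quoted figures fall out exactly, which suggests the source used the same decimal-unit, 30-day assumptions.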

The availability of large amounts of unused bandwidth and storage space on users' computers has facilitated the emergence of wide-scale peer-to-peer networks dedicated to sharing content. The attention focussed on file-sharing, due to the contested nature of its legality, has impeded consideration of other applications, such as the virtualisation of storage space across networks (already a significant industry) and bandwidth pooling for the authorised dissemination of content lying outside the ownership and control of the major media and communications conglomerates. These peer-to-peer systems are typically established at the application level, employ their own routing mechanisms, and are either independent of or only ephemerally dependent on dedicated servers.

Note that the decentralized nature of pure P2P systems means that these properties are emergent properties, determined by entirely local decisions made by individual resources, based only on local information: we are dealing with a self-organized network of independent entities.

File-sharing devices behave like cache clusters, keeping traffic local. Description of (1) Content Distribution Networks. Peer networks can be used to deliver the services known as Content Distribution Networks (CDNs), essentially comprising the storage, retrieval and dissemination of information. Companies such as Akamai and Digital Harbour have already achieved significant success by installing their own proprietary models of this function at a global network level, yet the same functions can be delivered by networks of users even where they have only a dial-up connection. Napster constituted the first instantiation of this potential, and subsequent generations of file-sharing technology have delivered important advances in terms of increasing the robustness and efficiency of such networks. In order to understand the role that peers can play in this context we must first examine the factors which determine data flow rates in the network in general.

(2) Content Storage Systems. Fibre Channel Storage Area Networking currently dominates the market in storage. A market in storage space on end-user equipment is easy to imagine and would provide a real competitor to the incumbent market players. The principal inhibitor of transfer speed is geographically determined latency. As storage space continues to be cheaper than bandwidth, local storage options are attractive. OceanStore is assembling a network of untrusted storage nodes for basic data. "Fragmentation and distribution yield redundancy. Distributed autonomous devices connected on a network create massive redundancy even on less-than-reliable PC hard drives. Redundancy on a massive scale yields near-perfect reliability. Redundancy of this scope and reach necessarily utilizes resources that lead to a network topology of implicitly “untrusted” nodes. In an implicitly untrusted network, one assumes that a single node is most likely unreliable, but that sheer scale of the redundancy forms a virtuous fail-over network. Enough “backups” create a near-perfect storage network."
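The redundancy claim in that quotation can be put in numbers. A toy calculation, assuming each replica sits on a node that is online independently with probability p:

```python
# If each untrusted node is reachable with probability p, the chance that
# at least one of n replicas is reachable is 1 - (1 - p)**n.
def availability(p, n):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(availability(0.5, n), 4))
# Even 50%-reliable nodes give ~99.9% availability with 10 replicas.
```

The independence assumption is the weak point in practice (nodes on the same network segment fail together), but it illustrates why "enough backups" on unreliable PCs can rival a dedicated storage array.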

These companies, like the file-sharing networks, function through virtualising resources, uniting them across the network in an object-oriented manner. IDC estimated the market for storage service providers at 2,379 million dollars in 2002 (Internet 3.0 at p. 80).

Unused Storage Capacity. "We looked at over 275 PCs in an enterprise environment to see how much of a desktop’s hard drive was utilized. We discovered that on average, roughly 15% of the total hard drive capacity was actually utilized by enterprise users." (Internet 3.0 at p. 81) The economic incentive derives not from the value of memory, but rather from the potential to integrate the space into a distributed system that can obviate expensive network transfers.

Akamai now operates in the storage business as well, in combination with Scale Eight, who have four storage centers and provide access for clients through a customised browser.

1(a) Breakdown of congestion points on networks. The slow roll-out of broadband connections to home users has concentrated much attention on the problem of the so-called 'last mile' of connectivity. Yet the connection between the user and their ISP is but one of four crucial variables deciding the rate at which we access the data sought. Problems of capacity exist at multiple other points in the network, and as the penetration of high-speed lines into the 'consumer' population increases, these other bottlenecks will become more apparent. If the desired information is stored at a central server, the first shackle on speed is the nature of the connection between that server and the internet backbone. Inadequate bandwidth, or attempts at access by an unexpected number of clients making simultaneous requests, will handicap transfer rates. This factor is known as the 'first mile' problem and is highlighted by instances such as the difficulty in accessing documentation released during the Clinton impeachment hearings, and more frequently by the 'Slashdot effect'. In order to reach its destination the data must flow across several networks, which are connected on the basis of what are known as 'peering' arrangements between the networks, facilitated by routers which serve as the interface. Link capacity tends to be underprovided relative to traffic, leading to router queuing delays. As the number of ISPs continues to grow, this problem is anticipated to persist, since whether links are established is essentially an economic question. The third point of congestion is located at the level of the internet backbone, through which almost all traffic currently passes at some point. The backbone's capacity is a function of its cables and, more problematically, its routers: there is a mismatch between the growth of traffic and the pace of technological advance in router hardware and software packet forwarding.
As more data-intensive transfers proliferate, this discrepancy between demand and capacity is further exacerbated, leading to delays. Only after negotiating these three congestion points do we arrive at the delay imposed at the last mile.

Assessing Quality of Service. What are the benchmarks to evaluate Quality of Service? ("Typically, QoS is characterized by packet loss, packet delay, time to first packet (time elapsed between a subscribe request send and the start of stream), and jitter. Jitter is effectively eliminated by a huge client side buffer [SJ95]." Deshpande, Hrishikesh; Bawa, Mayank; Garcia-Molina, Hector, Streaming Live Media over a Peer-to-Peer Network.) For those who can deliver satisfaction of such benchmarks the rewards can be substantial, as Akamai demonstrates: 13,000 edge servers located in network providers' data centers; click-through rates of 20% [10-15% abandonment rates] [15%+ order completion].

Although unable to ensure the presence of any specific peer in the network at a given time, virtualised CDNs function by possessing a necessary level of redundancy, so that the absence or departure of a given peer does not undermine the functioning of the network as a whole. In brief, individual hosts are unreliable and thus must be made subject to easy substitution. From a technical vantage point the challenge then becomes how to smooth the transfer to replacement nodes (sometimes referred to as the problem of the 'transient web').

To facilitate this switching between peers, the distribution-level applications must be able to identify alternative sources for the same content. This requires a consistent identification mechanism, so as to generate a 'content-addressable web', a goal currently absorbing the efforts of commercial and standard-setting initiatives [Bitzi, MAGNET, Open Content Network].
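The idea of a consistent, content-derived identifier can be sketched as follows. The urn:sha1 Base32 form is the style used in Gnutella-family magnet links; the helper function itself is hypothetical:

```python
import hashlib, base64

# Sketch of a content-addressable identifier: peers that hash the same
# bytes derive the same name, so any holder is an equivalent source.
def content_urn(data):
    digest = hashlib.sha1(data).digest()          # 20-byte SHA1
    return "urn:sha1:" + base64.b32encode(digest).decode()

a = content_urn(b"the same song bytes")
b = content_urn(b"the same song bytes")
print(a == b)                      # True: identical content, identical address
print(a.startswith("urn:sha1:"))   # True
```

Because the name is computed from the bytes rather than assigned by a publisher, a client that loses one source can query the network for the same URN and resume from any other peer holding the identical file.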

In short, the effectiveness of any given peer network will be determined by: a) its connectivity structure, and b) the efficiency with which it utilises the underlying physical topology of the network, which ultimately has limited resources.

Techniques used for managing network congestion: (b) load balancing / routing algorithms. "Load balancing is a technique used to scale an Internet or other service by spreading the load of multiple requests over a large number of servers. Often load balancing is done transparently, using a so-called layer 4 router." [Wikipedia] LB appliances, software, intelligent switches, traffic distributors: Cisco (DistributedDirector), GTE Internetworking (which acquired BBN and with it Genuity's Hopscotch), and Resonate (Central Dispatch) have been selling such solutions as installable software or hardware. Digex and GTE Internetworking (Web Advantage) offer hosting that uses intelligent load balancing and routing within a single ISP. These work like Akamai's and Sandpiper's services, but with a narrower focus. - Wired
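Two of the simplest balancing policies such appliances implement can be sketched as follows. This is a toy model of the general technique, not any vendor's actual algorithm:

```python
import itertools

# Round-robin: hand requests to servers in strict rotation.
class RoundRobin:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

# Least-connections: hand each request to the server with the
# fewest requests currently in flight.
class LeastConnections:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1    # caller decrements on completion
        return server

rr = RoundRobin(["a", "b", "c"])
print([rr.pick() for _ in range(4)])    # → ['a', 'b', 'c', 'a']
```

Round-robin is stateless and cheap; least-connections adapts when requests vary wildly in duration, which is why layer-4 devices typically track connection counts.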

- caching / NAT (Network Address Translation). Destination NAT can be used to redirect connections pointed at some server to randomly chosen servers to do load balancing. Transparent proxying: NAT can be used to redirect HTTP connections targeted at the Internet to a special HTTP proxy which is able to cache content and filter requests. This technique is used by some ISPs to reduce bandwidth usage without requiring their clients to configure their browser for proxy support, using a layer 4 router [Wikipedia]. See Inktomi. Caching servers intercept requests for data and check whether the data is present locally. If it is not, the caching server forwards the request to the originator, and passes the response back to the requester, having made a copy so as to serve the next query for the same file more quickly.
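The intercepting-cache behaviour just described reduces to a few lines. A sketch only: real caches also handle expiry, validation and storage limits.

```python
# Serve locally when possible; otherwise fetch from the origin and keep a copy.
class CachingProxy:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch    # function: url -> content
        self.cache = {}
        self.hits = self.misses = 0

    def get(self, url):
        if url in self.cache:
            self.hits += 1                  # served locally, no upstream traffic
            return self.cache[url]
        self.misses += 1
        content = self.origin_fetch(url)    # forward to the originator
        self.cache[url] = content           # copy kept for the next query
        return content

proxy = CachingProxy(lambda url: f"<page at {url}>")
proxy.get("/news"); proxy.get("/news"); proxy.get("/sport")
print(proxy.hits, proxy.misses)             # → 1 2
```

Every hit is a request that never crosses the peering points or the backbone, which is the whole economic case for ISP-side caching.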

Local Internet storage caching is less expensive than network retransmission and, according to market research firm IDC, becomes more attractive by about 40% per year. Caches are particularly efficient for international traffic and traffic that otherwise moves across large network distances. at 83

content delivery network/streaming media network

- Akamai. Akamai FreeFlow: a hardware/software mix of algorithms plus machines, a mapping server (fast checking of hops to region) and content servers. http://www.wired.com/wired/archive/7.08/akamai_pr.html Sandpiper applications. Data providers concerned to provide optimal delivery to end users are increasingly opting to use specialist services such as Akamai to overcome these problems. Akamai delivers faster content through a combination of proprietary load-balancing and distribution algorithms and a network of machines installed across hundreds of networks where popularly requested data will be cached (11,689 servers across 821 networks in 62 countries). This spread of servers allows the obviation of much congestion, as the data is provided from the server cache either on the network itself (bypassing the peering and backbone router problems and mitigating that of the first mile) or from the most efficient available network given load-balancing requirements.

(c) Evolution of filesharing networks

Popular filesharing utilities arose to satisfy a more worldly demand than the need to ameliorate infrastructural shortfalls.

- Napster. When Shawn Fanning released his Napster client, the intention was to allow end-users to share MP3 files by providing a centralised index of all songs available on the network at a given moment, and the ability for users to connect to one another directly to receive the desired file. Thus Napster controlled the gate to the inventory but was not burdened with execution of the actual file transfer, which occurred over HTTP (insert note on the speculative valuation of the system provided by financial analysts, with qualification). Essentially, popular file-sharing utilities enable content pooling. As is well known, the centralised directory look-up made Napster the subject of legal action, injunction and ultimately decline.

Nonetheless, Napster's legal woes generated the necessary publicity to encourage user adoption, and for new competitors to enter the market and innovate further. In the following section I describe some of the later generations of file-sharing software and chart the innovations which have brought them into a space of competition with Akamai et al.

- Gnutella. The original implementation has been credited to Justin Frankel and Tom Pepper from a programming division of AOL (the then-recently purchased Nullsoft Inc.) in 2000. On March 14th, the program was made available for download on Nullsoft's servers. The source code was to be released later, supposedly under the GPL license. The event was announced on Slashdot, and thousands downloaded the program that day. The next day, AOL stopped the availability of the program over legal concerns and restrained the Nullsoft division from doing any further work on the project. This did not stop Gnutella; after a few days the protocol had been reverse-engineered and compatible open source clones started showing up. (from Wikipedia) The Gnutella network (BearShare/Limewire) represents the first decentralised architecture, in which each node acts as both client and server. This allows a much more robust network, in the sense that connectivity is not dependent on the legal health of a single operator. The trade-off is inefficiency in locating files, and the problem of free-riding users, who actually impede the functionality of the system beyond simply failing to contribute material. Limewire addresses this problem to some degree by providing the option to refuse downloads to users who do not share a threshold number of files. Unfortunately this cannot attenuate the problem of inefficient searches per se, merely offering a disciplinary instrument to force users to contribute. In order to sharpen search capacities in the context of a problematic network design, these networks have taken recourse to nominating certain nodes as super-peers, by virtue of the large number of files they are serving themselves. While essentially efficacious, the consequence is to undermine the legal robustness of the network.
The threat is made clear in a paper published last year by researchers at Xerox PARC that analysed traffic patterns over the Gnutella network and found that one per cent of nodes were supplying over ninety per cent of the files. These users are vulnerable to criminal prosecution under the No Electronic Theft Act and the Digital Millennium Copyright Act. The music industry has been reluctant to invoke this form of action thus far, principally because of its confidence that the scaling problem of the Gnutella community reduces the potential commercial harm it can inflict. As super-peering etc. becomes more effective, this may change.

Incompatibilities exist between Gnutella supernodes/ultrapeers. - FastTrack/Kazaa. Similar systems are now being offered by these companies to commercial media distributors, such as Cloudcast (FastTrack) and Swarmcast, using technical devices to allow distributed downloads that automate transfer from other nodes when one user logs off. The intention here is clearly the development of software-based alternatives to the hardware offered by Akamai, the principal player in delivering accelerated downloads, used by CNN, Apple and ABC amongst others.

- eDonkey/Overnet. eDonkey and Freenet distinguish themselves from the other utilities by their use of hashing to identify and authenticate files. As data blocks are entered into a shared directory, a hash block is generated (on which more below). Freenet introduced the idea of power-law searches into the p2p landscape, partially inspired by the speculation that the Gnutella network would not scale due to a combination of its broadcast search model, the large number of users on low-speed data connections, and the failure of many users to share. eDonkey became the first to popularise p2p weblinks and to employ the Multicast File Transfer Protocol so as to maximise download speed by exploiting multiple sources simultaneously and allowing each user to become a source of data blocks as they are downloaded. In addition, the Donkey allows the use of partial downloads on other peers as part of the pool from which the download is sourced, dramatically improving availability.
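The block-hashing scheme that makes multi-source and partial-source downloads safe can be sketched like this. The 4-byte block and MD5 are purely illustrative (eDonkey actually hashed much larger parts with MD4); the point is that each verified block can come from a different peer, including one holding only a fragment:

```python
import hashlib

BLOCK = 4   # tiny block size for illustration only

def block_hashes(data):
    # One digest per fixed-size block of the file.
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

original = b"abcdefghij"
expected = block_hashes(original)      # published alongside the file link

# Two peers, each holding only part of the file.
peer1 = {0: b"abcd", 2: b"ij"}
peer2 = {1: b"efgh"}

assembled = {}
for peer in (peer1, peer2):
    for idx, blk in peer.items():
        if hashlib.md5(blk).hexdigest() == expected[idx]:   # verify on arrival
            assembled[idx] = blk

result = b"".join(assembled[i] for i in sorted(assembled))
print(result == original)       # True: rebuilt from two partial sources
```

Verification per block is what lets a downloader safely treat a peer with 30% of a file as a source, rather than waiting for complete copies.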

One drawback to eDonkey is its proprietary character. Happily, recent months have seen the appearance of an open source donkey client called eMule (http://www.emule-project.net/). Mldonkey does something similar (http://www.infoanarchy.org/story/2002/8/7/45415/23698).

- Freenet. "Each node maintains its own local datastore which it makes available to the network for reading and writing, as well as a dynamic routing table containing addresses of other nodes and the keys that they are thought to hold." (Hong, Theodore (et al.) (2001). Freenet: A Distributed Anonymous Information Storage and Retrieval System. In Federrath, H. (ed.) Designing Privacy Enhancing Technologies: International Workshop on Design Issues in Anonymity and Unobservability, LNCS 2009. New York: Springer.) The information retained by Freenet nodes distinguishes it from the Gnutella network. Given the presence of more information at the query-routing level, less bandwidth is spent on redundant simultaneous searches. In addition, copies of requested documents are deposited at each hop on the return route to the requestor. Freenet effectively reproduces the caching mechanism of the web at a peer-to-peer level, so as to respond to the actual demand on the network. If there are no further requests for a document, it will eventually be replaced by other transient data. All locally stored data is encrypted and sourced through hash tables. Any node maintains knowledge of its own hash tables and those of several other nodes.
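The deposit-on-the-return-route mechanism can be modelled in miniature. This is a toy chain of nodes, not the real Freenet routing or its encryption:

```python
# A request walks a chain of nodes; the answer is cached at every hop on
# the way back, so popular data migrates toward the demand for it.
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def request(self, key, route):
        if key in self.store:
            return self.store[key]          # answered from local datastore
        if not route:
            return None                     # data not found on this path
        value = route[0].request(key, route[1:])
        if value is not None:
            self.store[key] = value         # cache a copy on the return hop
        return value

a, b, c = Node("a"), Node("b"), Node("c")
c.store["doc"] = "contents"
print(a.request("doc", [b, c]))     # → contents
print("doc" in b.store)             # → True: the intermediate node now caches it
```

A second request from anywhere near a or b is now satisfied locally, which is the sense in which Freenet "reproduces the caching mechanism of the web" in response to actual demand.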

Having overcome the need for scattershot searches, Freenet theoretically manages bandwidth resources in a much more efficient manner than Gnutella. On the other hand, the importance allocated to maintaining anonymity through encryption detracts from its potential to become a mass-installation file-sharing program: specifically, in order to cloak the identity of the requestor, the file is conveyed backwards through the same nodes that resolved the query, utilising bandwidth unnecessarily in transit. Little surprise, then, that in the last eighteen months Freenet's inventor, Ian Clarke, has founded a company called Uprizer that is porting the Freenet design concept into the commercial arena whilst jettisoning the privacy/anonymity aspect.

(ii) Connectivity Structure. Traffic volume derived from search requests has become a significant problem. The Gnutella client Xolox, for example, by introducing a requery option to its search, produced what was described in Wired as a low-level denial of service attack (http://www.salon.com/tech/feature/2002/08/08/gnutella_developers/index.html). Search methods. Centralised look-up: the most efficient way to search decentralised and transient content is through a centralised directory look-up. Napster functioned in this way and economised on bandwidth as a result. Alas, it also left the company vulnerable to litigation, and it is safe to say that any p2p company providing such a service will meet the same fate.

Broadcast. In this case queries are sent to all nodes connected to the requestor. The queries are then forwarded to nodes connected to those nodes. This leads to massive volume, often sufficient to saturate a dial-up connection. It is also extremely inefficient, as the search continues even after a successful resolution has been achieved. Most searches have a 'time to live' to limit the extent of the search, and where there are many weaker links the search can die without ever reaching large parts or even the majority of the network. This is the search method initially used by Gnutella. A bandwidth-based tragedy of the commons effectively obliged the creation of super-peers, which centralise knowledge about their local networks so as to offer better look-up. Such a step is a deviation from the pure p2p model and raises the spectre of attractive litigation targets once again.
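A toy simulation makes the cost of broadcast search visible: the query fans out to neighbours hop by hop until the time-to-live expires, and every forward costs a message whether or not the file has already been found. The random graph here is illustrative only, not a model of the real Gnutella topology:

```python
import random

def flood_search(graph, start, holders, ttl):
    # Breadth-first flooding with a TTL, counting every forwarded message.
    messages = 0
    seen = {start}
    frontier = [start]
    found = False
    while frontier and ttl > 0:
        nxt = []
        for node in frontier:
            for nb in graph[node]:
                messages += 1               # every forward costs a message
                if nb in seen:
                    continue
                seen.add(nb)
                if nb in holders:
                    found = True            # note: flooding continues anyway
                nxt.append(nb)
        frontier = nxt
        ttl -= 1
    return found, messages

# Random sparse network of 100 nodes, 4 links each.
random.seed(1)
graph = {n: random.sample([m for m in range(100) if m != n], 4)
         for n in range(100)}
found, msgs = flood_search(graph, 0, holders={42}, ttl=5)
print(found, msgs)
```

Even in this tiny network a single lookup generates hundreds of messages; scaled to thousands of peers, many on dial-up, the per-search overhead is what drove Gnutella toward super-peers.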

Milgram. A 1967 experiment by Stanley Milgram on the structure of social networks yielded surprising results. A random sample of 160 people in the US Mid-West were asked to convey a letter to a stockbroker in Boston using only intermediaries known on a first-name basis. 42 of the letters arrived, in a median of 5.5 hops; fully one third of the successfully delivered letters passed through the same shopkeeper. The evidence drawn from this experiment was that whilst most people's social networks are narrow and incestuous, each group contains individuals who act as spokes to other groups. The conclusions drawn were dubbed the 'small world effect' for obvious reasons. Power law. Hess, eDonkey bots,
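The small-world effect is easy to reproduce numerically: a ring lattice stands in for the narrow, incestuous local networks, and a handful of random shortcuts stand in for the individuals who act as spokes to other groups. A toy model along the lines of the Watts-Strogatz construction:

```python
import random
from collections import deque

def avg_path_length(n, local_k, shortcuts, seed=0):
    rng = random.Random(seed)
    # Ring lattice: each node linked to its local_k clockwise neighbours.
    graph = {i: {(i + d) % n for d in range(1, local_k + 1)} for i in range(n)}
    for i in range(n):                    # make local links symmetric
        for j in list(graph[i]):
            graph[j].add(i)
    for _ in range(shortcuts):            # a few long-range "spoke" links
        a, b = rng.randrange(n), rng.randrange(n)
        graph[a].add(b); graph[b].add(a)
    # BFS from node 0 to every other node.
    dist = {0: 0}
    q = deque([0])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(dist.values()) / (len(dist) - 1)

# A few random shortcuts collapse the typical path length.
print(avg_path_length(1000, 2, 0) > avg_path_length(1000, 2, 50))   # True
```

This is the structural intuition behind super-peer designs: a small number of well-connected nodes can shorten search paths across an otherwise sparse network.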

The struggle against these changes is not the traditional struggle between left and right or between conservative and liberal. To question assumptions about the scope of "property" is not to question property. I am fanatically pro-market, in the market's proper sphere. I don't doubt the important and valuable role played by the market in most, maybe just about all, contexts. This is not an argument about commerce versus something else. The innovation that I defend is commercial and non-commercial alike; the arguments that I draw upon to defend it are as strongly tied to the Right as to the Left." at page 6

The future that I am describing is as important to commerce as to any other field of creativity. Though most distinguish innovation from creativity, or creativity from commerce, I do not. etc. at page 10.

I disagree with Jamie about James Boyle, and on multiple points, so I'm much happier that you brought up Larry Lessig. No-one can question the inexhaustible energy that he has poured into opposing IP expansionism and the Microsoft monopoly, and into defending freedom of speech amidst the unfolding saga of law and politics' attempts to adjust to the novelties generated by technological advance.

But then so have many right-wing libertarians, and many others with whom I will not have any political association. That didn't make their activity any less welcome, just as their opposition to state intrusion on civil liberties and scepticism of bellicose foreign adventures is welcome. The question is to what degree and in what way I (or we) wish to interact with them. This has (and has had) a determining impact upon the riposte (or lack thereof) to the outrages that have occurred relentlessly in the area of intellectual property in the last decade.

Now, given the collapse of politics, never mind 'the left', I can feel glad that someone did something, as those of a more radical social outlook have been either too up their own ass, too stuck in a certain ritualistic form of leftist militantism - both of action and thought - or just plain too dissociated to be able to act.

The citations above underline two of the central problems that I encounter with Lessig's work. As list-members are aware, the historical discourse of the commons was not fuelled by the desire to sustain entrepreneurialism. Rather, it was to fight against the pauperisation of millions so as to swell the riches of the few. Furthermore, by removing the material basis for subsistence (land for cultivation) the 17th-18th century rulers were determined to create 'hands', a labour force, that would man the manufacturing centres that were flourishing in urban areas. The choice was between starvation and wage-labour.

In modern western societies the impact of the IP enclosures is not a threat of starvation, although in the developing world it will be death through refusal of medication.

Nonetheless, an emancipatory possibility has arisen for cultural and communication workers due to the precipitous descent in the price of the productive equipment required and the distribution mechanism potentially available. In this context, it is not surprising that the information aristocracy has targeted the primary matter, the creative or informational works, as the locus that must be fortified to perpetuate their domination. Otherwise media workers could simply work for themselves, as many attempt to do despite the stranglehold exercised at the licensing and distribution levels.

Let's leave aside Larry's romanticisation of the entrepreneurial figure, which deserves attention in its own right, and examine his vision of innovation. At no point does the impact of innovation upon the distribution of wealth merit scrutiny. Does this mean that all change is positive, every technological novelty an occasion to incant the marvellous nature of our modern age? Growing up in the shadow of the nuclear bomb, might we have one or two queries with regard to innovation's effect on the environment? Rather than continuing down that path, I merely want to point out that at no point is the dogma of 'all progress, all the time' challenged. Paging Walter Benjamin. Is Walter Benjamin in the house?

Now Lessig was in a position to address some of these points, even in a passing manner, and he didn't. A huge number of people with an interest in technology politics listen to him, and what does he say? He has explicitly rejected the proposition that in order to fight IP expansionism tech-activists should cross-pollinate with social movements and direct actionists. Apparently even the mousish EFF lacks the necessary moderation:

"You are too extreme. You ought to be more mainstream." You know and I am with you. I think EFF is great. It's been the symbol. It's fought the battles. But you know, it's fought the battles in ways that sometimes need to be reformed. Help us. Don't help us by whining. Help us by writing on the check you send in, "Please be more mainstream." The check, right? This is the mentality you need to begin to adopt to change this battle. http://news.openflows.org/article.pl?sid=02/08/22/1937218&mode=thread

And that during a speech otherwise spent berating the audience for their failure to take political action? What's politics? Donating to Rick Boucher, delegating your struggles to the EFF (even if they are too militant), writing to Congress, etc.

Now that is Lessig's position politically, and the Supreme Court appeal in Eldred this autumn represents the apex of his strategy. The US Constitution is to be the document that stops the copyright train, out of control and off the tracks, dead. I hope that they are successful, but I'm not confident. And even if Sonny Bono is struck down, the communications conglomerates will find new methods to shovel shit down our throats.

If you keep the code free you can do things with it that affect the physical layer of the net, that affect the content layer of the net, that affect the meaning of the net.... Everything's going to flow in the wires, just don't let them notch up the resistance at the ends. - Eben Moglen

Good `management' of the processes of knowledge consists of polarising them, of producing success and failure, of integrating legitimating knowledges and disqualifying illegitimate knowledges, that is, ones contrary to the reproduction of capital. It needs individuals who know what they are doing, but only up to a certain point. Capitalist `management' and a whole series of institutions (particularly of education) are trying to limit the usage of knowledges produced and transmitted. In the name of profitability and immediate results, they are prohibiting connections and relationships that could profoundly modify the structure of the field of knowledge.

