New Civilization News - Category: Internet    
Strong Elastic Links
10 Jul 2010 @ 13:01, by ming. Internet
There's something fundamentally messed up about the way we store and use information. Most of our information connects really badly with related information, and with the stuff the information is about.

I've talked about that before, like here: Connected Information, so I'll try not to repeat myself. It is, however, somewhat difficult to convey my point. I've tried writing and rewriting this as an article a couple of times, but left it unfinished. It still isn't coming out very clearly, but I'll leave it at that.

I want information to be linked, by unbreakable elastic links, to what the information is about.

The type of links we know on the web are useful, way more useful than no links. But they're a pathetic shadow of the type of links we potentially could have, links that truly would be useful and reliable.

I'm sure it is not only me who has found some interesting article on the net or in a magazine about something new and promising. Say, self-driving cars or super-efficient solar cells. And then, months later, when I try to search for information about how that project might be going now, there's no trace of it. Some journalist did some kind of investigative job and wrote about something. On the web it might even include some clickable links to more information, like another article or a company website. When I come back some months later, those might still be there, or they might not. It is quite likely those links would point to some frozen information from that same time period. What happened later might remain a mystery, unless I have the time and resources to do a fresh piece of detective work.

The links we use on the web are like addresses on an envelope that we put in a mailbox. They indicate some kind of coordinates for a recipient. "He's over there!" But he might not be. The address might have changed and become invalid, or it might now be occupied by somebody else who has no relation to the person I'm trying to reach. The links don't follow the target when it moves. Likewise, web links aren't very good at linking up real people or real subjects.

Part of the problem is that web links are one-way pointers. They just point in the direction of some virtual place. That place doesn't easily know it is being linked to, because there's no link the other way. So, even if the people behind it wanted to, they couldn't easily update everyone who linked to them on the status of what was linked. Even if they could, it would still be a cumbersome thing to do.
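To make the one-way problem concrete: the missing piece is a record of the reverse direction. A minimal sketch of what a two-way link registry might look like, with hypothetical page identifiers made up for illustration:

```python
# Hypothetical sketch: a registry that records every link in both directions,
# so a target can discover who points at it and could, in principle, notify
# those sources when it changes.
from collections import defaultdict


class LinkRegistry:
    def __init__(self):
        self.outgoing = defaultdict(set)  # source -> set of targets
        self.incoming = defaultdict(set)  # target -> set of sources

    def link(self, source, target):
        # Registering a link updates both directions at once.
        self.outgoing[source].add(target)
        self.incoming[target].add(source)

    def backlinks(self, target):
        # The target can now see everyone linking to it.
        return self.incoming[target]


registry = LinkRegistry()
registry.link("blog/post-1", "example.org/article")
registry.link("blog/post-2", "example.org/article")
print(registry.backlinks("example.org/article"))
```

This is roughly what wiki backlinks and trackbacks attempt, though only within systems that cooperate; the web at large has no such shared registry.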

Links shouldn't just be some address. They should actually link the two things.

The reason you have problems with spam is because the contents of the e-mail messages you receive don't really link up with anything. There's an address for the sender and the recipient, and addresses for servers that have processed the e-mail. All of that can be arbitrarily made up by anybody, because the e-mail doesn't actually link to the sender and the recipient. It can say all sorts of stuff that isn't at all true, or it can say things that were true at some point, but which go out of date later.
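The forgeability is easy to demonstrate, as is one standard remedy: a message signed with a secret shared by sender and recipient is actually bound to its origin, unlike plain e-mail headers. A small sketch, with the secret and headers made up for illustration:

```python
# Hypothetical sketch: binding a message to its sender with an HMAC signature.
# A forged "From" header then fails verification, whereas ordinary e-mail
# headers can be fabricated by anyone.
import hashlib
import hmac


def sign(secret: bytes, message: bytes) -> str:
    return hmac.new(secret, message, hashlib.sha256).hexdigest()


def verify(secret: bytes, message: bytes, signature: str) -> bool:
    # Constant-time comparison, to avoid leaking information via timing.
    return hmac.compare_digest(sign(secret, message), signature)


secret = b"key shared between real sender and recipient"
msg = b"From: alice\nSubject: hello"
sig = sign(secret, msg)

print(verify(secret, msg, sig))               # genuine message: True
print(verify(secret, b"From: spammer", sig))  # forged message: False
```

Schemes like DKIM apply this same idea to real e-mail, though only to the sending domain, not to the actual person.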

Imagine that you could attach a link to something, and that link, without a doubt, would maintain the connection, no matter what.

For the moment, never mind how it could be done, but imagine that between all people, all groups, all subjects and all media about any of these things, between all of those there would be unbreakable links. Hard links, so to speak, or strong links, but elastic, as they will "stretch" to any length no matter how the nodes move around and transform.

You probably know what school you went to in a certain year. That school is a rather finite entity. It should not be a matter of archaeological detective work to retrieve the information of who the principal was, and what became of any of the teachers or any one of the students. The school was an unmistakable entity. It was there, very physically, it had buildings, it was paid for, it stayed there for a long time. The same with all the people who were there. Every single one was unmistakably a real, living, breathing person. There's really nothing fuzzy about it at all. But in accordance with the way we typically treat information, it has been saved in a very fuzzy manner. If you go search for your school in search engines, there is likely to be some doubt about what school you're talking about, and whether it even exists. It is going to be very hard to locate a list of teachers or a complete list of students, if one exists. The information was kept on pieces of paper, which might have been mislaid or lost or falsified, and maybe never digitized. Even if you found the list, you wouldn't know if it was the right one, and even if you did, it is only a list of names and maybe addresses and maybe a photo. Most of these people have moved, many of them have changed their names, some have died, etc. It would be a huge amount of work to track them down, and you'd probably have to give up on quite a few of them.

We've gotten so used to sloppy, unlinked information that we find it quite natural and normal that information gets lost, or that it is hard to reconstruct, or that nobody knows if it is true or not. We even find a certain comfort and security in all this fuzziness. There's no government that is sure how many people there are in the country it governs. And that's despite the fact that it really wants taxes from all of them, doesn't want illegal immigrants, and requires everybody to have an ID. And the subject matter, persons, is in no way vague. It isn't difficult to decide if somebody is a person or not. They're very finite, and the number of people is finite.

The moment you commit information to little bits of paper and sloppy handwriting and filing cabinets and vague references to other storage places, the game is lost. The link between the information and what it is about is no longer there. It isn't much better if the same system is simulated with computers. Useful information can often be reconstructed, but there's nothing that guarantees that.

In the electronic world, we should by now be able to do much better. There's absolutely no reason to store our information in the same sloppy manner, lists of names and addresses in files that can be lost and falsified, or, worse, in free unstructured text form that also is stored in fairly random places, without real links to the subject matter.

What I'm asking for is, in part, two-way links, as one can pull the string from either side. But it is also unbreakable links, not just pointers. Not just signs that point in the general direction of the other piece of information. Rather, something like an electrical wire. The moment somebody cuts it, an alarm goes off. Or a quantum entanglement kind of mechanism, where you just can't mess with it without it being noticed.
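The "alarm on a cut wire" idea can at least be approximated today with content fingerprinting: a link that stores a hash of its target knows the moment the target changes. A minimal sketch, with the record names invented for illustration:

```python
# Hypothetical sketch: a tamper-evident link. The link carries a fingerprint
# (SHA-256 hash) of the content it points to; any change to that content is
# detected on the next check -- a weak version of the "alarm goes off" wire.
import hashlib


def make_link(target_id: str, content: bytes) -> dict:
    return {
        "target": target_id,
        "fingerprint": hashlib.sha256(content).hexdigest(),
    }


def link_intact(link: dict, current_content: bytes) -> bool:
    return hashlib.sha256(current_content).hexdigest() == link["fingerprint"]


link = make_link("school-records/1985", b"principal: Mrs. Jensen")
print(link_intact(link, b"principal: Mrs. Jensen"))   # untouched: True
print(link_intact(link, b"principal: someone else"))  # altered: alarm, False
```

This detects that something changed, but it can't follow the target to its new location; that would still require something like the two-way registry above the web doesn't have.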

How can one practically implement it? I didn't say I knew how, just that I want it, and that everything we do with information would totally change if we had reliable links. But it is not like it is an unsolvable problem. It would in no way be impossible to provide each living person with a unique encrypted ID code. There are certainly issues of politics and of privacy, and of identity theft, but they could be solved if there were any unified wish to have unique IDs. As it is now, it is in most places a no-brainer to acquire multiple ID numbers or to disappear to somewhere else.
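The uniqueness part, at least, is technically trivial; it is the politics that is hard. A sketch showing that collision-resistant IDs can be issued at scale with standard tools:

```python
# Hypothetical sketch: issuing unique IDs with random UUIDs. This settles
# only the technical question (uniqueness); it says nothing about the
# political, privacy, and identity-theft questions raised above.
import uuid

# Issue ten thousand IDs; a set keeps only distinct values.
ids = {uuid.uuid4() for _ in range(10_000)}
print(len(ids))  # 10000 -- collisions are astronomically unlikely
```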

The same applies to things, places, organized groups, subjects, etc. Information is as sloppily kept as for people, or more. A car at least has an ID number, but it is only used by government agencies, not for recording your photos or car trips or anything else.

A lot of stuff might deserve being very loosely joined, but not facts. A piece of information that is or could be a fact when recorded shouldn't later be a matter of searching and guessing. You should know its level of correctness by the way it is linked, not by some forensic text analysis.

Our shared information system has Alzheimer's. Real events instantly get converted into vague guesswork and conjecture and interpretation and stories and remixed soundbites. And then we expect to pour all of that stuff together, have a machine sort it all, and then we'll discover how really smart we are?

We'd probably get somewhere faster if we at least could keep most of the objective stuff straight, and then we could use our imagination and reasoning abilities for more important stuff than merely trying to reconstruct what is going on.

Call for Papers: (Online) Conference On Systemic Flaws and Solutions 2009
13 Oct 2008 @ 14:42, by jhs. Internet
[Note, added Oct 18th: the Wiki for the Conference is at www.systemicflawsandsolutions.com
e-mail: systemic.flaws.and.solutions@gmail.com ]
---------------------------------------------------------------
In light of recent events, and following plans I have had for a long time, I decided to go ahead with an online project that I first thought of in 1995, when Flemming founded the New Civilization Network and I installed its first Internet server at my then-home in the Hollywood Hills. Meanwhile much has changed, except for the systemic problems of our society. Flemming came up with better 'worlds', such as 'Holoworld' [link]; I myself envisioned 'Freeland' [link], about which I also wrote in my book(s), the 'Logs of JD Flora' [link].

I don't want to duplicate those visions of a better blueprint for society; they speak for themselves. Instead, I am looking at how to TRANSFORM the current society from a systemic viewpoint so that a new society grows into a new structure by itself, as a living system.

Therefore I present here the setup of a collaborative, online book, to which I give the introduction and first outline, and to which I invite the readers of my BLOG to present papers that would fit into this scheme or add to it.

The 'papers' should be as brief and concise as possible in order to be inserted as chapters into a book of proceedings of at most 500 pages, to which the author would give the right of publication, if this should ever make sense and be feasible.

The project is officially organized (and sponsored) by Power Relations Ltda, yes a corporation, haha, [link]

The language will be English. I hope, with the cooperation of GZ or Flemming, to set up a Wiki, and it should also be possible to make it multilingual if there is interest.

The intention of this compilation is to outline SYSTEMIC FLAWS as opposed to MECHANISTIC (Cartesian) ERRORS. The project is not about placing blame on anyone in particular, but about bringing about the INSIGHT that only systemic changes can bring about a true shift, and 'fixing the symptoms' doesn't help in the long run.

Here it is:

Call for Papers: (Online) Conference On Systemic Flaws and Solutions 2009
January 9th-14th, 2009

(the chapters are in order of current priority; typically only the solutions are listed in this rough draft, as the problems are implicitly visible)

Contents:

Static or dynamic web metaphors
25 Oct 2007 @ 21:47, by ming. Internet
Anthony Judge: Transforming Static Websites into Mobile "Wizdomes" - enabling change through intertwining dynamic and configurative metaphors. Always interesting and challenging reading from Tony Judge.

The metaphors we employ to travel the web are extremely pervasive, but almost invisible to most. Same thing with how we use computers in general. I'm sure a lot of folks can't imagine anything different than their computer having a "desktop", even though that's a strangely antiquated metaphor to use. Here we have a mindblowing amount of computational power, and software that can deal with a hundred dimensions just as easily as two, and then we model the whole thing around a copy of our desk, with folders and pieces of paper and a trashcan. With many of the same limitations our desk has, which is exactly what we need to go beyond. Seems silly, but habit is strong, and often we can't see anything other than what we're presented with, and what we're used to seeing.

Here's from Tony's article, about "sites":
There is the interesting possibility that "site" may come to be understood as a static outmoded metaphor for the manner in which people and collectives find it appropriate to engage with the universe of knowledge. Site implies a particular location, especially the location with which the web user has some involvement and which may be deliberately constructed as an articulation of individual or collective identity. From there one can travel to other locations which others have configured to represent theirs.

However, whilst the "site" may reflect considerable effort in articulating a static identity -- whether or not it has interactive facilities analogous to those that might be expected in a person's house -- it says nothing about the dynamics of how a person moves and how identity may be associated with that. There may be links to other sites -- like travel books in a home library -- but the dynamics and style of that movement are only partially represented. Even more interesting is the question of "who" moves. There is a sense that an abstract entity, a "visitor", travels to other sites as an observer, a consumer, a tourist -- along the information highway. Possibly some form of link may be brought back -- like a photograph or memento. Arrangements may be made to "keep in touch" through an exchange of addresses. As the person responsible for a site, one may in turn make arrangements to receive such visitors.

The question asked in what follows is whether more fruitful understanding of these processes would emerge from changing metaphor.
Hm, yeah. So, a *site* is kind of like a shrine one leaves behind, while one is out doing other things. It might have a bookcase with your favorite books, a collection of your writings, a picture of you, some of the things you like. Why not the teddy bear from your childhood, a jar of your favorite peanut butter, a wardrobe with your old clothes, and a TV playing your favorite shows?

There are organizations of various kinds that leave an office in their building standing ready for their long dead founder. It has a nice comfortable chair he would have liked, a box of his favorite cigars, or whatever it was he liked. And somebody will come by and clean it once per week, and make sure things look just right.

Is that really the kind of vibe we want in a website?

When we add more dimensions and more tools, people will often just create more of the same. I'm thinking of virtual worlds. One buys a plot of land in Second Life, builds a house, looking just like a house in the regular world, with pictures on the wall, books in the bookcase, music on the stereo, etc. OK, one has the opportunity of making something one couldn't do in real life, because it would be either impossible or too expensive. One can have an avatar much more beautiful than one really is, and one can live in a mansion, and own a flying Ferrari. But it is still sort of the same thing. A somewhat static place that will represent what one wants to be thought to be, even while one isn't there most of the time.

Tony offers a bunch of possible alternative paradigms and related models and ideas. A whole bunch. One of the alternative ways of looking at it:
Rather than constructing a site, and visiting other sites elsewhere in cyberspace, suppose the focus shifted to the "vehicle" in which one travelled. Such a shift in paradigm is evident in the case of people who choose to invest in a mobile home to travel their continent, possibly with little immediate intention to return to a particular physical location. The focus is then on the design of the mobile home (a caravan) and its capacity to move. The "centre of gravity" of identity is then with the vehicle and its enabling capacity rather than with some particular physical space. A similar shift in identity is evident in the desire of people to possess a vehicle that better reflects their sense of identity than the place they are obliged to dwell for socio-economic reasons.

But this possibility then raises the question of how exactly the design of a "vehicle" might be expected to be different from the design of a "site". In the design of a site, considerable effort is put into ensuring that it is a reflection of one's personal (or collective) sense of identity. The aim is to fruitfully distinguish its unique qualities from those of others -- notably to render it more attractive. Website designers now have considerable experience in building a site to this end -- respecting the basic needs of visitors -- navigational needs within the site, clarity of content, etc. If the site is a more personal one, holding notes, photographs and the like, less effort may be put into facilitating the experience of visitors and more into its security features -- exactly as with the priorities of a householder for whom the needs of visitors are not of major concern.

How then to think about the design of a "vehicle"? Clearly search engines may be appropriately considered as a form of "public transportation". They may even offer facilities to "personalize" the engagement with such transportation -- configuring colours, layout, language, skins, etc.
OK, so, yes, an avatar would be an example of that. You work on designing the part that's moving around, rather than the part that stays behind.

At the same time we're still stuck to some degree with the same metaphors that limit a mobile home to being as much as possible like one's static home, however much one can manage to stuff it into a box on wheels.

One can get very fancy in designing an avatar for a virtual world, but it is still within the realm of some kind of body, without necessarily getting any new perceptions or access to larger amounts of data or anything.

And I'll argue that more useful interfaces would be more in the direction of extrasensory perceptions and out of body experiences. I mean, instead of duplicating or merely enhancing what we do every day in the meat world, we might make a much bigger jump and imagine what we actually might be able to do if unburdened by the limitations of having to drag stuff through 3 dimensions, which takes time and effort.

In principle, the internet-connected information world would allow you to be anywhere instantly and have access to any amount of information in any way you'd want. Do you really need to "travel" to a "site" and read "documents" in order to get to it? Even if it isn't just that, every site has its own metaphors and paradigms and rules and procedures. You need to sign up, you need to figure out the menus, the different "rooms" of the house that somebody presents you with.

That's of course not all that is going on on the web. A lot of protocols and mechanisms are emerging that potentially allow us to access things in our own way, without having to learn the map of somebody's house. Feeds, APIs, etc. Potentially we have some of the building blocks for creating a drastically different experience.
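Feeds are the clearest existing example of those building blocks: any RSS feed can be reduced to its items with a generic tool, no knowledge of the site's "house" required. A sketch with an invented feed, using only standard-library parsing:

```python
# Hypothetical sketch: reading a feed with one generic tool instead of
# visiting each site's own interface. The RSS 2.0 content below is made up;
# any conforming feed would parse the same way.
import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel>
  <title>Some Blog</title>
  <item><title>First post</title><link>http://example.org/1</link></item>
  <item><title>Second post</title><link>http://example.org/2</link></item>
</channel></rss>"""

root = ET.fromstring(rss)
for item in root.iter("item"):
    # Every item exposes the same structure, whoever published it.
    print(item.findtext("title"), "->", item.findtext("link"))
```

That uniformity is exactly what lets a feed aggregator be a "vehicle" in Tony's sense: one interface carried along to a thousand different sources.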

Back to Tony's article. He proposes some sort of structure that you can take with you, which can replace the metaphor of a site. He calls it the "Wizdome". "Wiz" can be for wisdom, as opposed to knowledge. And "dome" because it maybe could be thought of as being spherical, or maybe geodesic.
Combining these two suggested shifts in metaphor -- to the spherical and to the dynamic -- the question for the individual is whether what is required is to design such a "wizdome" from the elements of knowledge accumulated on any current website. Can such knowledge elements be configured spherically in a fruitful manner for that individual? Can a site be "endomed" or "domified"? What kinds of insights and expertise are required to bring about any such "enwrapping" of knowledge -- beyond the problematic aspects of cocooning? What is to be "encompassed" and how is this to be distinguished from any "encyclopedic" ambition...?

Additionally however, rather than a static dome, can such a wizdome be designed as a vehicle? Or, more intriguing, is it possible that its viability as a structure is specifically dependent on its movement as a dynamic structure -- as much a "whizdome" as a "wizdome"?

Also intriguing is the possibility that, to sustain its integrity as a dynamic structure, the wizdome may have to move in particular ways or to embody particular kinds of movement. It may indeed be capable of "whizzing" around.
Hm, maybe sort of like a merkaba, an interdimensional vehicle, often considered to be constructed of interlocking tetrahedra.

Some kind of vehicle to travel in on the interwebs might constitute progress.

There's me, and there's a whole lot of information out there, which I might want to interact with. I'd like to get beyond each separate storehouse of information building a house for me to visit to come look at it. And we're already halfway there. I read news in a feed aggregator; I choose my own e-mail programs and instant messenger programs. Although each of those has its own limitations, standing between me and what I'd like to do. I can sort of have these different tools at hand even while I travel around. I can chat in an IM program while looking at different websites, obviously. I can stay connected with a feed of messages from my friends on different computers, or on my mobile phone.

But to get further in terms of a different experience in dealing with the information world, is it still something like the Semantic Web that is needed? That all available information is thoroughly labeled, measured, categorized, so that I could use some completely universal tools to access it in any way I want, rather than having to put up with a million different interfaces? And, since nobody is going to do it for us, will it emerge as a folksonomy?

Either way, some old structures will have to die out before all this inter-connectivity really can live up to its potential. The internet is still a little too much like a thousand channels with nothing on. Oh, there's a lot on, and there are interesting channels, but it is hard to find what you really want, and do with it what you'd really want to do. Because the metaphors are getting in the way.

The Tyee - Vancouver's Online Newspaper
28 Mar 2007 @ 05:36, by ov. Internet
The Tyee is Vancouver's online daily newspaper, an alternative to Canwest, the neo-conservative mainstream. Vancouver has the biggest media monopoly in North America. The Tyee was started a few years ago; it has content that is applicable to local politics, and each article has a comments section. The site has some good comments along with the usual lefty-righty squabbles. A couple of weeks ago I joined to meet up with local people and to make myself known. This blog post is a copy of my introduction, and a post on synchronicity and dialogue.

Response to Josep L.I. Ortega's Statement for Unity of Action
11 Jul 2006 @ 15:12, by mre. Internet
The world is experiencing a tremendous surge towards human unity. Necessity drives us and we are pulled by a vision of a friendly world that has its act together and does the smart thing. But we have a big hurdle to get over, and that is figuring out how to make "unity and diversity" really work. This is what Josep L.I. Ortega is addressing in his congenial "Statement for Unity of Action", which also, by the way, provides us an excellent map of the movement for world citizenship and democratic global governance. His main point is that the various trends within that movement need to support each other and unite on an agenda in an umbrella organization. Only in this way can the movement achieve the numbers and credibility it needs to succeed.

Squidoo lenses
25 May 2006 @ 10:14, by silviamar. Internet
Some weeks ago I came across the lenses of Squidoo. I think it's a pretty interesting concept and I've set up two lenses there.

Web2.0
8 Apr 2006 @ 23:44, by ming. Internet
"Web2.0" is one of the hot buzzwords right now. But it's a fuzzy term that a lot of people seem to dislike, because, well, it is a buzzword, and there's no wide agreement on what exactly it is, or whether it really is something new. But largely it has something to do with a new breed of websites that have more sophisticated user interfaces, particularly ones that use Ajax to update stuff on the page without having to reload it. And it has something to do with engaging large numbers of people in contributing content and in adding value to existing content. And it has something to do with web services, like RSS feeds. I.e. standardized ways one can access stuff, no matter where it comes from. And thus new possibilities open for creating "mashups", i.e. new combinations of data from various sources. For example, Flickr is a photo sharing site, and it makes it easy for you to show those pictures in all sorts of settings other than their own site. Google Maps allows you to create maps based on their data, putting your own stuff on the maps.

Dion Hinchcliffe is one of the most articulate proponents for Web2.0, providing ongoing updates on his blog on where it is at. Like, see his recent State of the Web2.0. From there, a little overview of what it IS:
For those who don't follow it all the time, it might even be hard to remember what all the pieces of Web 2.0 are (and keep in mind, these elements are often reinforcing, so Web 2.0 is definitely not a random grab bag of concepts). Even compact definitions are sometimes a little hard to stomach or conceptualize. But the one I like the best so far is Michael Platt's recent interpretation just before SPARK. Keep in mind, the shortest definition that works for me is that "Web 2.0 is made of people." However, it's so short that important details are missing and so here's a paraphrase of Platt's summary.

Key Aspects of Web 2.0:

- The Web and all its connected devices as one global platform of reusable services and data
- Data consumption and remixing from all sources, particularly user generated data
- Continuous and seamless update of software and data, often very rapidly
- Rich and interactive user interfaces
- Architecture of participation that encourages user contribution

I also wrote a review of the year's best Web 2.0 explanations a while back and it goes into these elements in more detail if you want it. But there's a lot more to Web 2.0 than these high level elements would indicate. A key aspect not mentioned here, though I cover it in Sixteen Ways to Think in Web 2.0, is the importance of user ownership of data. The centrality of the user as both a source of mass attention (over a hundred million people, probably 2 or 3 times that many, are online right now) and an irreplaceable source of highly valuable data, generally encourages that the user be handed control of the data they generate. If control over their own attention data is denied them, they will just go to those who will give them that control. This gives some insight into the implications of Web 2.0 concepts, which were mostly gathered by examining prevailing trends on the Web. Forrester is calling the resulting fallout of these changes Social Computing and it'll be interesting to see what the effects of the widespread democratization of content and control will ultimately be a generation from now.
From the comments to Dion's blog posting, it is obvious that there's a lot of disagreement. About half of them seem to think that Web2.0 is a useless buzzword that just muddles everything. But some of them are also helpful with definitions. Nathan Derksen:
"Web 2.0 is comprised of applications that use sophisticated user interfaces, that use the Internet as an operating system, that connect people, and that encourage collaboration."
OK, that's simple and clear. Or, to give an idea of where it came from, from Varun Mathur:
On April 1st, 2004, Google launched GMail, which went on to ignite the whole Web 2.0 / AJAX revolution which we are witnessing right now. There is no agreed definition of Web 2.0. I like to think of it as the re-birth or second coming of the web. The Web 2.0 websites are more like web applications, and have a rich, highly interactive and generally well designed user interface. They could also be using web services offered by other sites (e.g., Google Maps, the Flickr photo web service, etc.). Syndication and community are also associated with a site being Web 2.0. AJAX is the technical term which is responsible for the increased interactiveness of Web 2.0 websites. But the fundamentals remain the same - what's under the hood of a Web 2.0 application is as important as it was a few years ago.
OK, it seems that it is part Collective Intelligence and part more lively user interfaces. It is about creating engaging, immersive websites that form open communities. Not communities as in a member forum, which is something that has existed for a number of years. But communities with fewer barriers and boundaries, where one rather freely can both contribute and consume lots of stuff in real time.

And, yes, maybe nothing very obviously new, but rather what the web was supposed to be all along. But in a more bottom-up and pragmatic kind of way. Being allowed to contribute and share more widely, and have somewhat uniform access to the contributions of many others, but without very many restrictions being imposed on you. An evolution, rather than a revolution. But it seems that collective intelligence becomes more visible and more of a target, which changes things.

Agora and Antigora
10 Jan 2006 @ 22:55, by ming. Internet
Jaron Lanier: The Gory Antigora. A brilliant essay about the net. Like how we both find examples of the Agora, the ideal democratic collaborative sharing space, and what he calls the Antigora, where somebody manages to set up huge, efficient profit-making machines built upon the ownership of their proprietary core. And how we in many ways seem to need both, and one builds on the other, in ways that sometimes are rather invisible.

He also laments how we lock ourselves into paradigms that aren't necessarily the best, but that become very stuck. You know, stuff like "files" and "desktops", and the ways we make software, which remains, as he calls it, "brittle". We still make software based on principles that mean it either works more or less 100% or it doesn't work at all. Which makes it all rather fragile, hard to change, and requiring lots of invisible unpaid work at the periphery to make it appear to be working. If you actually accounted for the work people spend trying to keep their Windows computers free of viruses, or trying to solve dumb problems with their software, it would add up to being outrageously, ridiculously expensive. Which it is. But it is still being used because a lot of people voluntarily make up for the gap between what it is supposed to do and what is actually going on.
There is no recognition for this effort, nor is there much individual latitude in how it is to be accomplished. In an Antigora, the participants at the periphery robotically engage in an enormous and undocumented amount of mandatory drudgery to keep the Antigora going. Digital systems as we know how to make them could not exist without this social order.

There is an important Thoreau-like question that inevitably comes up: What's the point? The common illusion that digital bits are free-standing entities, that would exist and remain functional even if there were no people around to use them, is unfortunate. It means that people are denied the epiphany that the edifice of the Net is precisely the generosity and warmth of humanity connecting with itself.

The most technically realistic appraisal of the Internet is also the most humanistic one. The Web is neither an emergent intelligence that transcends humanity, as some (like George Dyson) have claimed, nor a lifeless industrial machine. It is a conduit of expression between people.
And that is sort of the conclusion. It is not really about technology or economics; it is all about culture and the playing of an infinite game.

 Ruby on Rails (33 comments)
picture 14 Dec 2005 @ 15:15, by ming. Internet
Ruby on Rails 1.0 was released yesterday.

Grrr, I have too little time.

I had heard about Ruby and Ruby on Rails for quite some time, but didn't get around to looking more closely before recently.

Ruby is an elegant object-oriented language created in Japan some years ago by Yukihiro Matsumoto. I had looked at it several times, but however good it sounded, there has to be an exceptional reason to change the language one programs in. The biggest value is usually in knowing one's tools really well, as opposed to changing everything whenever another language or platform comes along with slightly better features.

As far as the web is concerned, I first made programs in Perl, because that was basically the obvious choice in 1995. I did shopping cart programs, chat programs, and various other things. But Perl is just too damned cryptic, and I never felt overly comfortable with it.

Then PHP started happening, and it just seemed so much more practical to be able to embed code directly into webpages, and it was cleaner and more straightforward. So, I switched without much hesitation. Since then I've written hundreds of thousands of lines of PHP code, and PHP has grown into the most widespread solution for coding webpages.

I've looked at other things in-between. Like Python. More of a "real" language, in that it makes it easier to make clean and well-structured programs that are easy to maintain. But that in itself wasn't enough to switch. But then I looked at Zope, which is a fabulous content management system and development framework, which makes a lot of hard things easier, and which is supported by loads of available modules. I was excited by that, and wanted to switch all my work to Zope. But after a couple of projects, I just felt kind of stupid. If I just used the pre-packaged modules, it was a piece of cake, but in developing new stuff, I just ended up not really grasping the "Zope Way". The people developing the core stuff are obviously super-smart, but so much so that I couldn't easily follow what they were talking about. So I ended up not going any further with that.

Now, Ruby on Rails is a framework built on top of Ruby. It could have been done in other languages, but Ruby lends itself very well to the purpose. It was developed, initially single-handedly, by David Heinemeier Hansson, a young Danish developer who is obviously also super-smart, but who additionally has a knack for making things extremely simple, and for just doing everything right. It supports the best practices for development, it supports most things that currently are cool and happening, like Ajax, it is well structured, easy to test, easy to deploy, etc. And with Ruby on Rails you don't pride yourself on how many lines of code you've written and how long it has taken, but quite the opposite. You'll brag about having written some major application in a few weeks, with just a few thousand lines of code.

Rails is built around a fixed but flexible pattern called MVC: Model, View, Controller. The models keep track of the data, connect with databases, validate the data, and store the business rules for the data. The controllers receive requests from the users to perform different kinds of actions. The views are templates for what is presented to the user. That's nothing really new, but it is a good way of organizing an application. One could do that in PHP, but typically one doesn't. Rails, however, enforces a certain structure. There's a set of directories where these various things are stored, and there are certain naming conventions for what they're called. That some of these things are set and known in advance is part of what makes it very simple. A Rails principle is "Convention over Configuration". If we know that the model is always found in the models directory, and that it is named to correspond to the database table, and a few other conventions, we suddenly have little need for configuration files that define where things are and what goes with what.
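The idea can be illustrated in plain Ruby, without Rails installed. The sketch below derives a table name and a file path from a class name alone, roughly the way Rails' naming conventions do; the class names here are hypothetical, and Rails' real inflector handles many more cases (irregular plurals, namespaces, etc.):

```ruby
# A minimal sketch of "Convention over Configuration": the table name
# and the file location are derived from the class name itself, so no
# configuration file has to spell them out. Illustrative only -- not
# the real Rails inflector.
class ModelBase
  # "OrderItem" -> "order_items": underscore the CamelCase name, then
  # pluralize naively by appending "s"
  def self.table_name
    name.gsub(/([a-z])([A-Z])/, '\1_\2').downcase + "s"
  end

  # By convention the model lives in app/models/<underscored_name>.rb
  def self.file_path
    "app/models/#{name.gsub(/([a-z])([A-Z])/, '\1_\2').downcase}.rb"
  end
end

class OrderItem < ModelBase; end

puts OrderItem.table_name  # => "order_items"
puts OrderItem.file_path   # => "app/models/order_item.rb"
```

Because both sides agree on the convention, the framework can find the model for a table (and vice versa) without being told where anything is.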

Another basic principle is "Don't Repeat Yourself" (DRY). Meaning, one should really only state something once. If you have a rule that says that you cannot order more than 10 of a given item, there should be one and only one place to state that. Most programmers would want to follow a rule like that, but in most systems it is hard to stay true to it in practice. Not so with Rails, as there typically already is one right place to put that rule, so there's no doubt about it.
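In Rails that one place would be a validation on the model. Here is a plain-Ruby sketch of the same shape (the `OrderLine` class and its limit are hypothetical, not real ActiveRecord): the "no more than 10" rule is stated exactly once, and every code path that checks an order line goes through it.

```ruby
# A sketch of DRY: the quantity rule lives in exactly one place -- the
# model -- rather than being repeated in every form, page, and script
# that touches an order line. Hypothetical class, not real ActiveRecord.
class OrderLine
  MAX_QUANTITY = 10  # the one and only place this business rule is stated

  attr_reader :quantity, :errors

  def initialize(quantity)
    @quantity = quantity
    @errors = []
  end

  # Re-runs the validation and collects human-readable error messages,
  # similar in spirit to ActiveRecord's valid? / errors pair.
  def valid?
    @errors = []
    if quantity > MAX_QUANTITY
      @errors << "quantity must be at most #{MAX_QUANTITY}"
    end
    @errors.empty?
  end
end

puts OrderLine.new(3).valid?   # => true
puts OrderLine.new(12).valid?  # => false
```

If the limit ever changes to 20, one constant changes and everything that validates order lines picks it up.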

The online video demos for Rails are mind-blowing. You know, like write a simple weblog program in 15 minutes. If you just want to try Ruby itself, here's a great little interactive tutorial.

Well, I haven't gotten much further than installing Ruby and Rails on my machine and going through a few tutorials. But I'm very impressed, and I think this probably will be a way I'll go.

I'm an expert at PHP programming, and I've done a number of fairly impressive things. But it tends to end up being a bit of a mess. You can do a quick thing in PHP really, really quickly. But a complex program in PHP is very complex. And after you've done it, you discover that there isn't any very good way of testing it, and things break whenever you change something. And everybody does things a little differently, so if you get the job of changing something in somebody else's program, it usually looks like a big pile of spaghetti, however cleverly it might have been written.

I just spoke with one of the people from a company I worked with for several years, developing big things in PHP. I had wondered why I hadn't heard from them for a few months. Turned out that in the meantime they had converted their whole operation to Rails, and they are extremely happy with it, and everything was much easier. That's some folks with very high-volume websites and a few dozen servers. And no wonder they don't need me any more.

Luckily Ruby and Rails are so relatively simple that one can become an expert faster than in various other arenas. Oh, it is not a complete no-brainer, either. Rails can seem a bit intimidating at first. No graphical interface or anything. You're editing text files and running command-line utilities. The productivity mainly starts when one is fluent with all the basic pieces, and one intuitively knows where things go.

Anyway, the best places to learn are the two main bibles, which are lying right here next to me. Programming Ruby and Agile Web Development with Rails. You can read them online too, for that matter.

Ruby and Rails are often connected with "agile" or "pragmatic" programming. These are keywords for modern methods of fast and flexible development, very different from the traditional slow and linear methods. Traditionally one would learn to develop software according to a Software Development Life Cycle (SDLC) approach, which involves copious amounts of formal proposals, specifications, etc. First a committee of people would do feasibility studies, then it would go to an analyst who would make models and specs, and the programmers would essentially be told what to do. And when they discover that it isn't a great idea to do it that way, or when, later, one discovers that it wasn't really what was needed, it is cumbersome to change. The Agile, Pragmatic or Extreme approach would rather be to go very light on the specs and analysis and get down to work ASAP, but in very short incremental phases, like daily updates, and to do it, as much as possible, WITH the stakeholders who need the result. Preferably, sit down with the end users, change stuff, and show them right away. One could theoretically do that with any language, like PHP, although it isn't easy, and one would probably be crazy to hope to do that with Java or C++. But if you're working with a framework that is geared towards working like that all the way through, it comes much more naturally.

Anyway, I could sure use a 10X productivity boost. And right now Rails looks like the most likely candidate in the programming arena. Plus, I want to be cool too.

 Saving the net from the pipe owners (47 comments)
19 Nov 2005 @ 14:12, by ming. Internet
Doc Searls has an excellent and long article, "Saving the Net: How to Keep the Carriers from Flushing the Net Down the Tubes". Basically, the net as we know it is in grave danger if the large telco carriers that own most of the pipes it runs on manage to get their way. You might think that the net is just this common free space where we all can share and communicate. But some very large companies think it is merely their property, their delivery mechanism for content that you'll have to pay for. And if we don't watch out, they might get their way, by getting their business plans put into law. Well, U.S. law at least, but that unfortunately sets the tone for how things work. This quote from Edward Whitacre, CEO of SBC, epitomises the problem:
Q: How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?

A: How do you think they're going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. So there's going to have to be some mechanism for these people who use these pipes to pay for the portion they're using. Why should they be allowed to use my pipes?

The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!

He was asked this, as far as I remember, because SBC started blocking VoIP (voice conversations over the net) on their network. Which they think they have a right to do, because they don't make money off of them. Well, they do - people pay for their internet connection, but SBC also sells phone service, and they don't want competition. So, where we think we're free to do whatever it is technically possible to do on the net, these guys have in mind to only let you do things they get paid a cut of. I.e. they regard the net as the vehicle for big companies to deliver paid content to you, the consumer, rather than as your way to peruse the information you're interested in.

One of the many good points in Doc's article is that the battle is about semantics. Not "just semantics", but semantics in the sense that one side is somewhat succeeding in positioning the discussion to be about ownership and property. They "own" the pipes, the copyrights, the content, and everybody else is a freeloader who wants to rip it off for free. And law, particularly in the U.S., tends to be on the side of property owners. Here's from a previous article, particularly about copyright, discussing that beyond the legal and political contexts there is the metaphorical:
The third is metaphorical. I believe Hollywood won because they have successfully repositioned copyright as a property issue. In other words, they successfully urged the world to understand copyright in terms of property. Copyright = property may not be accurate in a strict legal sense, but it still makes common sense, even to the Supreme Court. Here's how Richard Bennett puts it:

The issue here isn't enumeration, or the ability of Congress to pass laws of national scope regarding copyright; the copyright power is clearly enumerated in the Constitution. The issue, at least for the conservative justices who sided with the majority, is more likely the protection of property rights. In order to argue against that, Lessig would have had to argue for a communal property right that was put at odds with the individual property right of the copyright holder, and even that would be thin skating at best. So the Supremes did the only possible thing with respect to property rights and the clearly enumerated power the Constitution gives Congress to protect copyright.

Watch the language. While the one side talks about licenses with verbs like copy, distribute, play, share and perform, the other side talks about rights with verbs like own, protect, safeguard, secure, authorize, buy, sell, infringe, pirate, and steal.

This isn't just a battle of words. It's a battle of understandings. And understandings are framed by conceptual metaphors. We use them all the time without being the least bit aware of it. We talk about time in terms of money (save, waste, spend, gain, lose) and life in terms of travel (arrive, depart, speed up, slow down, get stuck), without realizing that we're speaking about one thing in terms of something quite different. As the cognitive linguists will tell you, this is not a bad thing. In fact, it's very much the way our minds work.

But if we want to change minds, we need to pay attention to exactly these kinds of details.

"The Commons" and "the public domain" might be legitimate concepts with deep and relevant histories, but they're too arcane to most of us. Eric Raymond has told me more than once that the Commons Thing kinda rubs him the wrong way. Communist and Commonist are just a little too close for comfort. Too social. Not private enough. He didn't say he was against it, but he did say it was a stretch. (Maybe he'll come in here and correct me or enlarge on his point.) For many other libertarians, however, the stretch goes too far. Same goes for conservatives who subscribe to the same metaphorical system in respect to property.

So the work we have cut out for us isn't just legal and political. It's conceptual. Until we find a way to win that one, we'll keep losing in Congress as well as the courts.

Doc wrote another excellent article with David Weinberger, called The World of Ends, essentially arguing that the net is a neutral medium that connects up a lot of ends. It is about connecting everything to everything, with zero distance. The things to connect might be us having a conversation, or it can be you making a purchase in a store. Doesn't really matter. The net itself is stupid, and just acts as the connecting substrate. It is a place, to connect up end users. And that's what works. If anybody succeeds in re-defining it as merely their distribution mechanism, which they can slice up and monetize like they feel like, it starts not working any longer.
While the Net's nature is a world-wide place, the Web's nature is a world-wide publishing system. The Web was invented by Tim Berners-Lee, a scientist who wanted a simple way documents could be published and read, anywhere in the world, without restriction by physical location or underlying transport system. That's why it has hypertext protocols, "languages" and "formatting" standards. It's also why we "write", "author" and "mark up" "documents" called "pages" and "files" which we "post", "publish" or "put up" so others can "index", "catalog" and "browse" them.

To sum up, the Net has all these natures:

1. transport system (pipes)
2. place (or world)
3. publishing system

--and others as well. But those aren't at war with one another, and that's what matters most.

Right now #1 is at war with #2 and #3, and that war isn't happening only in the media and in congressional hearing rooms. It's happening in our own heads. When we talk about "delivering content to consumers through the Net", rather than "selling products to customers on the Net", we take sides with #1 against #2. We unconsciously agree that the Net is just a piping system. We literally devolve: our lungs turn to gills, our legs turn into flippers, and we waddle back into the sea--where we are eaten by sharks.

Hopefully it doesn't end up happening. Lawmakers might see the light and not just hand ownership of the net to a few corporations. We might collectively understand the matter clearly enough to not allow it. And there might be technological solutions that take things in a different direction and bypass the monopolies. Community wifi networks, for example.


