2006-12-31

Google: left hand meet right hand

This is a very common problem in the software industry, so I won't pick on Google too hard. My last post about Google's Firefox extensions didn't get any responses, so I thought I'd spend a couple minutes to find out if Google already had plans to fix the problem.

I found the announcement of the extension made by Glen Murphy about a year ago, but didn't find any follow-up announcements on that blog by Glen.

I checked the FAQ; nothing too useful there. There were support contacts: an e-mail address and a discussion group. The discussion group had relevant information:

If you just signed up for Blogger, you've got a Blogger Beta account. Google hasn't yet updated their extensions to be compatible with Blogger Beta. Once they do, you'll be able to do whatever you should be able to do with the extension. Until then, I'm sorry to say you're stuck with the Web interface. :(

My complaint is that Blogger is no longer in beta. There is no mention of this issue on the Blogger bug list, at least none that I could find. Before you take something out of beta, shouldn't you make sure the other products from your company actually work with it?

Well, I'm hopeful that Google has already figured out this is a problem and that is why they are talking about more features and fewer products. See interview and Time article.

powered by performancing firefox

2006-12-29

Google Blogger Web Comments Firefox extension is broken

I decided I would play with some more Firefox plugins useful for bloggers and found a pretty nice one from Google, where this blog is hosted. The plugin adds comments to websites by looking for blog entries that link to that URL. This is pretty cool and works fairly well, but when I tried to create a comment (blog entry) using the extension, I got a login error. I use the new version of Blogger, so that could be the issue. I guess Google has the same problems as all of the other large companies when it comes to having one group talk to another.

If they use their own tools, I'd expect them to find this post quickly and fix their extension!


Blogged with Flock

2006-12-28

Quick note on the prematurely declared death of XML

I was initially taken aback when I read Douglas Crockford's blog post from a couple weeks ago titled "XML on the Web is Dead". He starts:

At the XML2006 conference in Boston last week, I heard a number of people proclaim that XML on the web is dead. XHTML is not going to replace HTML as the web's official markup language because it turns out that resilience is more useful than brittleness. And JSON is quickly displacing XML as the preferred encoding for data interchange.

I agree with the case against XML for data interchange. What is required for data interchange is something that works and works easily. The data is consumed in JavaScript, and JSON also solves the nasty little same-origin sandbox problem, so it works. No argument from me there.
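To make the contrast concrete, here's a quick sketch of my own (not from Douglas's post) that consumes the same made-up record both ways; the closing comment is the sandbox point:

    // The same record as XML and as JSON; assumes a browser environment
    // for DOMParser. The record itself is made up for illustration.
    const xmlPayload = "<person><name>Ada</name><age>36</age></person>";
    const jsonPayload = '{"name": "Ada", "age": 36}';

    // XML: parse the document, then walk it to dig out each field.
    const doc = new DOMParser().parseFromString(xmlPayload, "application/xml");
    const nameFromXml = doc.querySelector("name")?.textContent;

    // JSON: one call yields a native JavaScript object.
    const person = JSON.parse(jsonPayload) as { name: string; age: number };

    // The sandbox point: a <script> tag may load JSON-bearing script from
    // another domain, while XMLHttpRequest is confined to its own origin.
    console.log(nameFromXml, person.name, person.age);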

As far as markup goes, well, this jostled me a bit more. Sure, a parser that expects only well-formed XHTML will be brittle, but that is no reason not to generate well-formed XHTML source. XHTML adds value for developers by providing namespaces and better character encoding support, right? When the XHTML is well-formed, the parser can be quite small and fast. When it is not well-formed, the tag soup can still generally be handled by the browser's parser with some extra work. That may go against the (misguided?) aspiration for a parser that rejects such bad XHTML, but it faces the realities of compatibility in computing platforms.
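A sketch of that two-path idea, assuming a browser-style DOMParser that accepts both content types (the function and its name are my invention):

    // Try the strict XML parser first, and fall back to the forgiving
    // HTML parser for tag soup.
    function parseMarkup(source: string): Document {
      const strict = new DOMParser().parseFromString(source, "application/xml");
      // Browsers report XML parse failures with a <parsererror> element.
      if (strict.querySelector("parsererror") === null) {
        return strict; // well-formed XHTML: the small, fast path
      }
      // Not well-formed: let the HTML parser do the extra recovery work.
      return new DOMParser().parseFromString(source, "text/html");
    }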

When I mention namespaces as a feature here, I'm not considering it a great feature that some folks come up with XML schemas that the web browser doesn't know. I'm talking about SVG, MathML, and other schemas known to the browser. I hope I'm not in the minority in thinking these are useful for many web developers.

Douglas went on to group the reactions of folks at the conference into three categories.

The third was "Don't say it like that." XML is in decline on the browser, but it still has a role on the server. If we go around saying it's dead, people might start looking for better alternatives.

Well, life would be boring if we weren't always looking for better alternatives. The point that browser-side programmers are looking for easier ways to exchange data seems quite valid, and I think the choice of JSON is exactly right. I think it is time for JavaScript to get some more serious usage on the server side, and there is plenty to talk about there. Maybe calling XML-on-the-web dead isn't that bad a thing to say.

XML and RDF haven't delivered all the tools necessary to turn everyday web programmers into clerics of the Semantic Web. They have delivered enough to be useful, living components on the web, even if they aren't given those names. I'm curious to know if anyone has done a good study of XHTML usage as Google did for HTML usage last year. I'd still expect to find its use frequent and growing.

2006-12-26

Blogging with Flock

I'm a bit of a nervous person. When Flock asks me for all of the usernames and passwords for my various social networking tools, it doesn't make me comfortable. Since this blog is about web-based collaboration, trying out some of the newer tools would seem to be a requirement. Now that Blogger is no longer in beta and is supported by Flock, it seemed appropriate to hit the "Blog" button on the first article I saw:

What Blogging Platforms Does Flock Support?

A table of supported blogging platforms is detailed at: http://wiki.flock.com/index.php?title=Blog_compatibility.

This is my second attempt to create this post. The first one was eaten when Flock crashed.


Blogged with Flock

Characteristics of the winning WebOS

I'm going to predict the future for you here. Hopefully, like any good prognosticator, I'll leave my definitions vague enough and my time-line open enough that I'm almost guaranteed to be correct. What I'm going to tell you is the look of the web operating system, or WOS, that will ultimately "win".

The primary purpose of a WOS is to convert web services into a commodity. This is really just argument by assertion, but here goes anyway.

DOS is to disk as WOS is to ___?
The point of a computer operating system is to abstract all of the "interesting" parts of a computer in a way that allows application programmers to make use of them. Think back. Disk Operating System, or DOS, existed to read applications off of disks and to allow those applications to read and write to disks. Without a disk operating system, application providers would need to partner up with both computer and disk drive providers to deliver an application suite. Some amount of application code would need to exist in the ROM of the computer just to read the rest of the application suite off the disk.

This is largely the state of the web today. A complete web application utilizes a wide variety of web services for distribution, authentication, session management, search, etc. To deliver a web application today, you need to deliver an entire web server, or fleet of web servers. You can partner with other businesses for site hosting, back-end databases, and storage. You can utilize relatively common components, such as with LAMP. Nevertheless, as it stands, it is impossible to define a complete and relatively complex web application that can be freely moved from one operating environment to another.

Many of the critical services needed by a web application can be hosted on the local desktop. I argue a WOS must include a reference server, though I don't know if that is the only solution. What I do know is that one desktop server doesn't scale well to applications that could have users across the entire web. It is therefore critical that the application can just as easily make use of commodity web services. That is the benefit of using a WOS: it provides the abstraction to make a web application that can scale without modification.
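As an illustration of the abstraction I have in mind (my own sketch; the endpoints are invented), the application names its storage service only by URL, so scaling up is a configuration change rather than a code change:

    interface StorageService {
      base: string; // "http://localhost:8080" or "https://storage.example.com"
    }

    async function saveDocument(svc: StorageService, name: string, body: string) {
      // The same HTTP PUT whether the server is on the desktop or on the web.
      await fetch(`${svc.base}/documents/${encodeURIComponent(name)}`, {
        method: "PUT",
        body,
      });
    }

    const local: StorageService = { base: "http://localhost:8080" };
    const hosted: StorageService = { base: "https://storage.example.com" };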

The interactions we can have with the web are much more interesting than the interactions we can have with disks, but I think the analogy holds. There are even some important interactions with disks that current desktop operating systems don't support, but that is another story. I even go so far as to say that using a WOS should allow you to move your applications to an entirely different web, off of the Internet and onto a private LAN. It is only some subset of the web that can be truly abstracted in this way, and many web applications will have dependencies that don't allow them to work independent of the Internet.

That subset of services that can be made independent of the Internet is what is important to a WOS.

Growing the market for web services
I don't mean to say that a WOS developer should focus only on the server-side. A WOS must provide an environment where those web services can be consumed by a sufficiently large market. In this context, it can be critically important to provide a rich set of client services and template applications to simplify application development. Some people seem to get this secondary objective of a WOS confused, such as the blogger quoted below.

But what is a WebOS (not to be confused with another definition of the term, see here), or a Webtop, anyway? Here’s a simple definition: WebOS is a virtual operating system that runs in your web browser. More precisely, it’s a set of applications running in a web browser that together mimic, replace or largely supplement a desktop OS environment.
The "webtop" functionality is important in the simplicity it provides in creating web application, but I think Wikipedia's definition is closer to correct. I struggle, however, to see where a webtop alone will significantly help generate the next killer web application. Web browsers already support tabbing and there are plenty of AJAX frameworks for producing rich interfaces. While a webtop and set of example applications seems necessary to be recognized as a WebOS, do any of these webtops offer sufficient benefit to application developers to lock themselves into one of them?

How a WOS will generate cash
Consumers want everything to be free. That is, they want it to cost them so little effort or resource that they don't even know they are paying. This is where a WOS has some real potential. A WOS can turn all of those profitless web application companies into differentiated web service providers. How? By putting a line between what is scarce and what is abundant.

Software is abundant. Given enough time, someone is going to write this WOS of the future and make it open source, so there is little hope of making money off of selling one in the long term. Though there is little money-making potential for someone making just the software for a WOS, there is significant value for the service, content, and hardware providers. The existence of a WOS can increase the number of their consumers and provide the infrastructure they need for revenue, such as micropayments. The most successful WOS will turn the scarce service and content resources into commodities, maximizing their availability.

At first, nervous service and content providers may be reluctant to feed such an ecosystem where they are on level ground with their competitors. They will wonder if they will be able to sustain their value. Eventually, they will learn that there is still value in their brand, their services, and their content. Why? Brands are scarce, because attention is finite and it takes attention to build confidence. Services require resources and expertise, and are therefore scarce. Despite the seeming abundance of user-generated content and the high-availability of copies of any particular content, there is a finite amount of content available to sustain the value of any given brand. Though new markets can be generated on top of an existing consumer base, the ability to generate content that is suitable to an existing target market is limited to a finite set of content providers. In other words, people will get bored watching home videos of cats climbing the curtains.

With the scarcity of services and content, the abundant WOS will simply be the catalyst for revenue. The hardware is a story for another day.

The predictions
To conclude, here are the top 10 characteristics I see for the winning WOS:
  1. It will be a standard, not a single implementation.
  2. It will be implemented with open source software at least once.
  3. It will be packaged with a web browser, like Flock.
  4. It will make transparent use of free (community-based), fee-based, subscription-based, or advertising-based web services when local services aren't sufficient.
  5. It will make use of open identity services, like OpenID.
  6. It will provide namespace and tunneling services, like Paper Airplane.
  7. It will provide media playback, publishing, and subscription services, like Democracy and Broadcast Machine.
  8. It will work on any network, including a network of a single computer.
  9. It will be accessible, isolating the application presentation such that the interface can be implemented independent from the core application.
  10. It will provide bandwidth aggregation, like BitTorrent.
(Update Jan 6: link to my other posts on web operating systems)

2006-12-21

Pimp my blog with FeedFlare

I'm not one of those people who is obsessed with having my own unique CSS, so you won't see me waste much time on that. As long as the format is readable and the HTML in the post feed looks okay, I'm happy.

What does interest me is finding people who get something out of my posts. I have a grand vision for the web, as do many people more influential than I am. My first few entries have been a bit longer than I'd like, but they are such a small step towards sharing my vision. It has been a challenge to keep them even as short as they are. Of course, having a day job helps. Nevertheless, I've noticed that a few people have managed to discover and read these entries. I'm hopeful that I'll be able to participate in discussions on these shared interests in the future.

Quickly catching myself up on some of the recent blogging tools, I found a couple of interesting posts. One was from Robert Scoble, featuring a great interview with Google's Marc Lucovsky about adding AJAX Search to your blog. I could use that to tie in other blogs talking about the same topics. The idea is to make it easier for someone, mostly me, to participate in a certain range of ideas. Well, that isn't going to be trivial, so I will move on for now.

The other post was from Matt McAlister on a new Del.icio.us publisher API. I'm a Del.icio.us user who scans for other people that bookmark items like I do. I add those people to my network. Putting the information of how many people bookmark my posts directly on my blog will save me some cycles. The first place I found information on the API was on FeedBurner, where they have added use of the API to their FeedFlare service.

I created a FeedBurner feed for my blog. The next step was to add the FeedFlare to the blog itself. One blogger pointed out that the snippet FeedBurner provides doesn't work with Blogger Beta. Following that advice, I went to the FeedBurner forums to pick up the required snippet.

Simple enough for me. Actually, it was pretty darn simple. Sure, I've left out all of the issues of now having multiple feeds, so I'll never be able to tell just how many subscribers I have. Sure, I've given my e-mail address to half the Earth by now to get access to one web service after another. No, I don't have any guarantees those guys will be providing that same service to me tomorrow, but why shouldn't they? Should I really be expecting anything more?

(Update 2006-12-23: I saw today Del.icio.us posted information on their site about the new API. I probably won't look at it much, since I've already got it going with FeedFlare.)

2006-12-19

Defining a WebOS API

It has been a couple of years since everyone came around to realizing that the web is the most important emerging application platform. All hands are now on deck to fight the resulting API war. Because I am an embedded software developer, you might think I wouldn't have much of a role to play in this war. I hope you'd be wrong.

What might seem simpler to some might be harder to understand for others
Back in the day when computers were hard to use, many of the non-engineers that I know would make use of computers by writing small programs. Those programs might print their names across the screen or play sequences of musical notes. On occasion, those programs would be borrowed from a magazine and might encompass an entire adventure or arcade game. I wouldn't necessarily call this programming, but it meant that they understood how to edit a source file, invoke the compiler or interpreter, and run an application they could actually dissect themselves. How many of your friends not involved in writing software for a living can do this today?

Those were also days when people made backups of their floppy disks. Sure, part of that was because they had personal experience with data loss, but they also would lose smaller bits of data at a time to learn this lesson. What's more, their computers would still work without any time lost to restore a backup to a hard drive. How many of your computer using friends create a backup of their data today, let alone a bootable image with the programs required for using that data?

See also:
Microsoft BASIC Wikipedia entry
TRS-80 Model I Level II BASIC page
Don't get me wrong, I dread the idea of turning on my computer and seeing a BASIC "Ready" prompt looking back at me. I hope you'll admit, if not reminisce, that the prompt at least let you know what state your computer was in and what it was "Ready" to do. The really good news is that web browsers have given much of that know-what-state information back to computer users through the beauty of a hyperlink, or URL. Typing a URL into the address bar of your favorite browser typically results in something you expect.

Since running web applications is easy, programming them should be easy too. If we can stay a little focused, we might just find that is possible.

What is an API?
In the embedded world, I get pretty frustrated with how often a function call is confused with an API. I can accept that a collection of function call definitions could constitute an API, but a good API is often much more than that. An API is everything an application programmer requires to make use of a bundle of software that was created by someone else. So, a documented set of function calls into a library could be an API, but so could documented interrupt traps, messages, or even URLs. What is important is that you can give enough information to the application programmer to make use of your software bundle.

In the web application and web services worlds, there are lots of possible starting points for defining an API, all with various benefits and pitfalls. On the web services side, there are SOAP, XML-RPC, and many more. On the web application client side, there are Mochikit, Dojo, DOM, GWT, and many more. On the web application server side, there are Zope, Zend, Ruby on Rails, Mediawiki, Twiki, SharePoint, Blogger, Drupal, and many, many more. Gather it up and there is a lot of room for confusion.

Fortunately, as complicated as all of the support layers have gotten, the fundamentals of how most of the web works have remained relatively consistent. Data is fetched from URLs using HTTP's GET method and manipulated using two or three other HTTP methods. This thinking has been captured in the representational state transfer architectural style, or REST, as described in Roy Fielding's doctoral dissertation. Given the average computer user's familiarity with fetching data using URLs and providing input using on-line forms, there is real potential for creating an API that people can understand.
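In sketch form, with invented resource paths, the whole style fits in a few lines:

    // Data lives at a URL, GET reads it, and a couple of other HTTP
    // methods manipulate it.
    async function demo() {
      const resource = "http://localhost:8080/notes/42";
      const note = await fetch(resource).then((r) => r.text()); // GET: read
      await fetch(resource, { method: "PUT", body: "new text" }); // PUT: replace
      await fetch(resource, { method: "DELETE" }); // DELETE: remove
      console.log(note);
    }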

A web service or web operating system REST API is somewhat akin to a desktop operating system call or software trap. In a system call, the application prepares some aspects of the CPU state then executes a CPU instruction to enter a more privileged set of code defined by the operating system. It is an important feature to not make these calls transparent, because this is where potentially sensitive information is moved to the network. These HTTP transactions are abstracted network transactions that most computer users can understand, because they already make these transactions frequently.

An early entry
YouOS has created something of an API for applications running within their WebOS environment. This API includes web application client library functions and server functions. YouOS has communicated their goals in the YouOS manifesto.
YouOS strives to bring the web and traditional operating systems together to form a shared virtual computer. To you, it's one giant computer that you and your friends can work on. To us, it's all the servers, routers, software, bandwidth, and engineers to keep this grand experiment in collaborative computing running.
I agree this is a noble vision for a web operating system, but this isn't necessarily in the best interest of consumers. At the least, some off-line capability, such as that described for Scrybe, should be included. Ultimately, there is also little reason to pay for storage and bandwidth if your home ISP connection is sufficient. I love the Amazon web services that YouOS utilizes to provide the scalability for Internet-wide software deployments, but locking customers into a single-source provider for YouFS isn't in a consumer's best interest. Big players won't see a reason to use the service defined by YouOS because they can invest in their own infrastructure, and new developers can't start as small as their own desktop.

Provide an open source server implementation and client library
One answer is to provide an implementation of the server that is hosted locally. By continuing to use the HTTP interface definition, the location of the service, primarily data storage and retrieval, can instantly be made irrelevant. Further, it defines the necessary interface for authentication and provides a sandbox for web applications. A new developer who learns this environment can be given most of the lessons necessary to deploy their application across the Internet.
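A toy version of such a locally hosted server, assuming Node.js (my sketch, not any existing WOS), needs little more than GET and PUT over an in-memory store:

    import { createServer } from "node:http";

    const store = new Map<string, string>(); // in-memory stand-in for storage

    createServer((req, res) => {
      const key = req.url ?? "/";
      if (req.method === "GET") {
        const value = store.get(key);
        res.writeHead(value === undefined ? 404 : 200).end(value ?? "not found");
      } else if (req.method === "PUT") {
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          store.set(key, body);
          res.writeHead(204).end();
        });
      } else {
        res.writeHead(405).end(); // only the minimal method set is supported
      }
    }).listen(8080); // the local endpoint applications develop against

An application written against this interface never needs to know whether port 8080 is a desktop process or a proxy for an Internet-scale storage service.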

A pitfall that must be avoided is the creation of too many abstractions in the client library that are potentially more leaky than the HTTP interface. The programmer should be kept aware of the HTTP transactions that need to occur to perform their desired tasks. By providing a local server, significant services can be kept out of the client library.

I'm not saying that the client library should be avoided altogether, but it shouldn't be seen as the entry point to the operating system. The client library is more akin to libc for C programs or the standard template library for C++ programs, but with less need for portability. As long as a web application makes only the HTTP transactions supported by the web operating system, it should run just fine, no matter what language or client library is used.

Use microformatted XHTML, not a new XML schema
What shouldn't be avoided is defining the inputs and outputs of those HTTP transactions to be extensible, forwards compatible, and backwards compatible. A simple approach to this is to make use of microformatted XHTML for all queries and results. The server should still support HTML for robustness, but only generate XHTML. XHTML is something that every major web browser already knows how to parse and present. Microformats extend XHTML in a way that doesn't break that compatibility.

One huge lift from using microformatted XHTML is the ability to provide a service like Amazon's AWS Zone directly on top of the web operating system services, without needing to understand the syntax of WSDL and SOAP. This self-documenting style is exactly what is necessary for those new to programming. It allows for cut-and-paste style experimentation that produces expected results, much as many people initially learned to produce HTML documents.
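For instance, a service could answer a query with hCard-style markup like the following (my sketch; the record is invented), and the exact markup a browser renders is the markup a client script parses:

    // The class names follow the hCard microformat: any browser displays
    // this response directly, and a program extracts fields by class name.
    const response = `
      <div class="vcard">
        <span class="fn">Ada Lovelace</span>
        <a class="url" href="http://example.com/ada">home page</a>
      </div>`;

    const parsed = new DOMParser().parseFromString(response, "text/html");
    const fullName = parsed.querySelector(".vcard .fn")?.textContent;
    console.log(fullName); // "Ada Lovelace"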

What all should a WebOS do?
Certainly it should provide data management so that you can get to your data wherever you are. I'd say it should go a bit further and define the paradigms for protecting and sharing that data with other users. Sharing content, especially media, is an area where Parakey seems a little bit more focused than the other WebOS entries. A WebOS should help you not only get to your data from everywhere, but publish and synchronize it with other users. (See: RSS, JXTA, and SyncML.)

A WebOS might need to handle some tasks where the programming environment of a web browser cannot execute tasks quickly enough. I've done plenty of assembly language programming in my career, but it is almost always only needed in a very small part of even complex algorithms. For the vast majority of software, the environment should optimize for ease of development. Dynamic type checking is better than static. Automatic memory management is better than manual. Programmers should only need to focus on the problem at hand. If the problem is the need to express some control flow, then just think about the software techniques required for that. If the problem is to create a high-performance video codec, something I do frequently, then worry about the individual machine cycles in your tight loops. Don't make programmers think about all of the problems at once.

The case of media manipulation seems to me to be one of those areas that is simply too intense for coding in JavaScript with today's interpreters. This is an instance where a proprietary interface, Adobe's Flex, seems to have an advantage over existing open source options. Hopefully, this is an issue that Parakey's JUL will address.

Should it also provide window management? This seems to be a popular function of the early WebOS entries, but I don't necessarily see the point. Web browsers already support multiple windows or tabs, though the idea of bringing up a desktop on remote computers in a predefined state has some appeal to me. What a web operating system should do is clarify and distinguish the roles of software modules along the lines of the model-view-controller, or similarly useful, pattern. This would allow web applications written against a WOS to be immediately more accessible.

Conclusion
Some say that programming will always be difficult. This may be true for highly-optimized, widely-deployed, well-structured, and cutting-edge applications, but the definition of programming is a bit of a moving target. I'd be happy to get more non-engineering people introduced to a programming environment where creating the "Hello World" application is truly trivial and creating the next YouTube is truly possible. That environment, where you can be self-taught and all of the tools are at your fingertips, should be among the goals for any WebOS application programmer's interface definition.

2006-12-13

Try migrating your wife's computer from WinXP to Mac OS X (Part 1)

It shouldn't come as a surprise to most of you that moving from WinXP to Mac OS X isn't as easy as Apple would like you to believe, no matter how much they try to blame that on Microsoft. Last weekend, I brought home a shiny new Intel-based iMac, the quiet, self-contained poster child for simple computing. One cord from the wall to the display, from there to the keyboard, and from there to the mouse; for $60 more I could have eliminated those last two cords as well, at the expense of changing batteries once or twice a year. My wife tells me it took her a minute or so to figure out where the actual computer was and find the DVD drive slot. I mention that as a bit of praise, actually, because we computer folks can really be pretty ignorant of just how confusing something like that can be for the completely uninitiated. Of course, it certainly wasn't trivial for me either.

Backing up the PC
I started by making a copy of her complete hard drive onto an external USB hard drive using Second Copy. About 50 files wouldn't copy. I looked at the log and I don't think any of them are important, but how am I supposed to really know? None of them looked like a data file name that my wife would have assigned, but I wanted to duplicate her PC settings to a virtual PC, so I also ran the Windows XP Files and Settings Transfer Wizard. Check out these notes from the helpful article on using the tool:

Don't wander too far away because the collection process occasionally turns up a file that can't be transferred, such as a .dat file, and asks how to proceed. Just click Ignore. After the collection process is complete, you'll get a list of those files. If there are a bunch of them, highlight and copy the list into Notepad and save it.
Sure enough, there were again about 50 files that wouldn't copy. Reading further down in those instructions for a better idea of what to do with those files yields a less-than-ideal response:
After the transfer is complete, you can copy over files that you want but the wizard couldn't transfer. Now your new computer is ready to go, and you didn't have to repeat the same configuration chores you performed to setup the old one.
Just why is it that these tools are written to solve half the problem and leave the rest to magic? I have no good idea if those files are really needed by any of the applications. I already tried to copy most of them with different tools. What magic should have happened so that I could copy these files now? No hint whatsoever is given on how I'm supposed to release the locks on those files. Most of you will say that those files probably weren't important, but how is that supposed to make Joe Average computer user feel, especially if one of them was important?

Initializing the Mac
Prompted by the Mac's migration assistant, started by the initialization software, I chose to give myself a jump-start by importing my account and applications from my G4-based Mac Mini. This was relatively painless; it taught me how to reboot my Mac Mini in FireWire target disk mode and introduced me to other boot key combinations. I was a bit nervous because most of my Mac Mini applications are not Universal. It seems that all of the programs in the Applications folder, such as Microsoft Office, the ones that are actually "bundles", work fine, transparently running under Rosetta.

Microsoft Office is one of the applications that got copied over, so I took a look into whether I'm contractually bound to purchase a second copy. Office ran fine on the new Intel-based iMac, but it seemed to detect when I was running one of the applications on my Mac Mini. It left me wondering how closely Big Brother was watching. In function, this is actually pretty nice, keeping honest people honest, and doesn't bother me. In theory, however, I can imagine the folks at the EFF having some issues with how this affects your privacy.

The programs I had built and installed under "/usr/local" had not been copied, but Fink programs under "/sw" did get copied. The Fink install tool worked, but none of the installed applications worked. I did a quick 'file binary_executable' and read about OS X binary executable types. I also read Apple's description on what Rosetta is supposed to run. My methods were a bit unscientific. I quickly moved on to more urgent matters about which my wife would actually care.

Moving the Data
The most valuable bits of data on my wife's computer, as far as I know, are her photographs, her music for iTunes and iPod, her Microsoft Word documents, some drawings, Quicken data, and all of her Microsoft Office e-mail and contacts. The Apple site topic on data migration told me:
Tip: If you're moving files over manually, you'll save yourself some time down the road if you organize your files during the process from the get-go. For example, move your My Pictures photos from your PC to your Home folder's Pictures folder on your Mac, move your PC's My Music song files into the Music folder on your Mac, move your PC's My Videos files to the Movies folder on your Mac, move your text and PDF files to your Documents folder on the Mac, export contacts to vCards on your PC and import them into Address Book on your Mac, and so on.
Okay, this is a little bit helpful for the uninitiated, but I quickly found myself jumping forward without the proper amount of planning anyway. What this tip drastically fails to tell you is anything practical about how each of those tools organize the data once you've moved it over. The best example I have is what happened to the photos.

I copied her entire "My Pictures" folder into her Pictures folder on the Mac. Originally, she used her HP camera software to import and organize her photos, leaving each "roll" of photos in its own subdirectory named after the date the photos were imported. There were also several subdirectories that she had copied off of my machine with names like 'triptoeurope2004'. Well, I figured, she's going to want some tool on this machine to manage all of her photos as well. I installed the HP software, but I figured it would likely be a bit easier to support her if she went the pure-iLife route.

iPhoto didn't remove originals after performing the import, so I removed them myself. They take up way too much space, the single largest body of her data, to leave many copies on a single drive. This was probably not a great idea, since iPhoto put them all in one roll, and not in multiple albums either. Oh, well, I guess she needs something to do on this new computer, rather than just enjoy the organization that existed on the old computer.

On to the music. This has to be simple, since all those people are switching to Macs because of iTunes and iPods, right? Well, Apple certainly has thought about moving iPod users from a PC to a Mac:

Step 2: Empty your iPod

To make your switch as painless and efficient as possible, you should clear off your iPod. We recommend emptying it completely of all your music. This sounds like a radical step, but don’t worry. Your music already resides on your PC, so you’re not in danger of losing anything. (You’ll need to re-copy all your iTunes music onto the iPod anyway — more on that in a minute.)

What?? Forget that I just happened to not do things in this order, but would I really want to? Yes, I do get the fact that there is a copy on the PC. In my case, I have this fall-back plan where if my wife doesn't like the Mac, then I'll simply take it and leave her on the PC. Also, the photo data is bigger than what would fit on her iPod and little consideration seems to be given to that possibility. Doing a scrub-job on her iPod sounds like it could get me in a lot more hot water than I'm in already for simply proposing that she move to a Mac at all.

I wanted to move the iTunes playlists, not just the music. The instructions left me a little nervous about what mess I'd need to clean up, but they worked fine.

Moving Mail
Buy this, buy that. Apple told me I needed to purchase yet more software to do the migration, if I really wanted it to be easy:
For easier moving, you might want to consider Move2Mac, a third-party application that makes the moving process easier. Not only will it move files from your PC to your Mac, it also transfers other items such as your email account settings and address book, Internet Explorer bookmarks, desktop backgrounds, dial-up Internet settings, and more.
After spending over $1000, what is another $50? $50 is what it is! If Apple didn't find it worth adding to the cost of the iMac before I bought it, I don't see the point now. Well, I'm about to learn why it is worth $50 and why Apple was stupid not to include it.

Microsoft has a nasty hold on your e-mail once it is in Outlook. I'd never thought about trying to get out of that trap using IMAP, but it seemed like a solid idea. If I wanted to move my mail simply with IMAP, I needed to create a .Mac account which provides an IMAP mail box, since my ISP doesn't provide IMAP. That solution has a recurring fee, so it is time to turn to the world of open source.

I initially got the idea to use IMAP and create an IMAP server from a blog entry by Paul J Lucas on configuring an IMAP server on a Mac Mini using Dovecot, and only stumbled upon the similar Apple recommendation later. I had Fink installed, so I figured installing Dovecot should be simple. After going in loops several times, I never could find a Fink distribution that included Dovecot, and I got the eerie impression that Fink was dead, based upon where the Fink FAQ sends you for mirror status:
Which yields:

Site Error

An error was encountered while publishing this resource.

Debugging Notice

Zope has encountered a problem publishing your object.

Cannot locate object at: http://www.uptime.at/uptime/status.html

Giving up on Fink, I went to DarwinPorts for Dovecot. This installation went happily along, but configuration was still a bit of a headache. I based my configuration file on Paul's blog, but I needed to go to the Dovecot wiki and read the quick configuration page to figure out how to do the PAM setup.

Setting up Mail.app was relatively simple, but because I signed my own certificate, I regularly get a warning message that I need to fix at some point. The more frustrating part was moving over the contact list. I could have tried to export the contacts using vCards, but writing a program just seemed silly. I extracted the contacts from Outlook 2000 without any hang-ups. Problems came when I tried to import the created .CSV file into Mail.app.

The Mail.app import function was actually pretty intuitive, which is good given the sparse instructions. I was able to figure out that "postcode" should be mapped to "zipcode" and just left off the third line of any mailing address, but I won't go into that here. The process was intuitive, but not necessarily simple or fast. When I finally hit the button to go ahead with the import, nothing happened. I was able to flip through all of the contact records fine, or so I thought. There was that one point where it hung a long time and I ended up restarting the import. Okay, that happened about three times.

It turns out that the import tool was having a problem parsing the .CSV file. I didn't spend too much time trying to figure out whether this was a file creation error from Outlook or an import error, but the result was an import that would hang and not present any error. Carriage return errors in the CSV file caused Mail.app to not complete the import. Clicking OK over and over again didn't tell me why the entries weren't actually imported, even though the preview was fine. Eliminating the problem entry cleared the issue and her contacts were moved.
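For anyone hitting the same wall, here's a hedged sketch of the cleanup that would have saved me, assuming Node.js and an Outlook-exported contacts.csv (the file names are my invention):

    // Stray carriage returns inside a CSV break naive line-based parsers,
    // so normalize the line endings before importing.
    import { readFileSync, writeFileSync } from "node:fs";

    const raw = readFileSync("contacts.csv", "utf8");
    const cleaned = raw
      .replace(/\r\n/g, "\n") // normalize Windows line endings first
      .replace(/\r/g, " ");   // any CR still left is mid-field: flatten it

    writeFileSync("contacts-clean.csv", cleaned);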

Maybe I should have at least spent $10 on Little Machines' O2M.

First impressions
When she looked at her new mail tool, she immediately noticed the lack of a status bar. Sure, there is that spinning thing that she'll notice at some point, but there were so many little things that changed moving from one application to another. It doesn't matter what anyone might say would be more "intuitive". What matters is that things are different now, and change is bad. Peopleware has a great write-up on people's impressions of change, but I haven't found any on-line quotes I can place here.

The dock was another point of confusion. The confusion was similar to what you'd find when first starting to use Windows, but she asked me a very relevant question: why are there two places for things on the desktop, one on the bottom and one on the side? I explained that one is the dock and the other is the desktop with desktop icons.

She used to have a taskbar that she could click to get between running programs. Now I need to explain that she can use the dock for the same purpose, almost. It still seems quite foreign, and not in an adventurous way. Nothing in the interface gives her hints to the existence of Exposé, so I tell her about some of the function keys. A few notes on a Post-it and that doesn't seem to be much of a problem right now.

Conclusion (for part 1)
I could go into many more details, but let me summarize this all in two words: computers suck. Sure, I've been using computers since the 1970's and I cannot imagine my life without them. I can bend them to my will and get all sorts of magic done; filling myself with that unique and utterly pointless pride that comes from getting something to work that maybe only one other person on the planet has seen. Yet, the task of moving data and applications from one computer to another is something any computer user will experience multiple times in their lives if they are blessed with longevity. My wife sees this all as some sort of inexplicable torture that could only be motivated by the most evil and twisted forms of geek pride. I can try to explain that this is all in some grand vision of making both of our lives ultimately easier, but I haven't found the parallel universe where that actually makes sense. Computers suck.

2006-12-12

Paper Airplane and Web Operating Systems

I've got a few posts in the works, but reading Brad Neuberg's blog post this morning on HyperScope has got me itchy to mention one of my pleas to the folks creating web operating systems. It was Brad's reference to Paper Airplane that has made me lose my patience. Paper Airplane is a JXTA-based project to allow anyone to serve up web content without a web server.

Paper Airplane is a Mozilla plugin that empowers people to easily create collaborative P2P web sites, without setting up servers or spending money. It does this by integrating a web server into the browser itself, including tools to create collaborative online communities that are stored on the machine. Paper Airplane Groups are stored locally on a user's machine. A peer-to-peer network is created between all of the Paper Airplane nodes that are running in order to resolve group names and reach normally unreachable peers due to firewalls or NAT devices.

Parts of Paper Airplane have been modularized into the P2P Sockets project, a reimplementation of standard Java sockets on top of Jxta and ports of standard web servers, servlet engines, etc. to run on top of a peer-to-peer network. P2P Sockets is at a 1.0 beta level, while Paper Airplane development is just beginning. Paper Airplane code will be posted to this site as it is developed.

See the demo screencast of Paper Airplane in action to get a quick overview.

Well, the Paper Airplane demo is starting to look pretty good. The bee in my bonnet is telling me to try to reach those Web OS folks, namely YouOS and Parakey, and make sure they aren't leaving this great research on the side of their efforts.

It must be among the key objectives of a Web OS to provide a programming layer, a core set of services, and guiding user interface paradigms. Decentralized hosting is fundamental among that set of services, as it is completely necessary for privacy, reliability, and ease-of-use. To require centralized hosting, that is, to fail to provide for all users to be service and content providers, would be a devastating sin.

Expect more on this topic from me soon.

2006-12-06

The Future of Digital Media Includes Participation

[I originally wrote this article on August 30, 2006, before Google purchased YouTube, but I think it is worth sharing here.]

There is a huge push to provide the next generation of digital content distribution to consumers, one that is more responsive to their desires. Today, content is typically created by large production companies, like Disney, then distributed by cable and satellite TV carriers. In the move to IP set-top boxes (IPSTB), there is a notion of giving consumers more "on-demand" choices. Largely being missed is just how much control over available digital content consumers will have. New developments in online social networks are showing us that many consumers are interested in and capable of producing content, and there is an audience.

The split between traditional media and a new field of participation media comes when the audience is given mechanisms to respond and engage.[4] The traditional path is to provide the audience with more choice, be it through cable and satellite distribution, "on-demand" programming, or the forthcoming IP set-top boxes. Participation media, instead, gets the audience involved, providing opportunities for everyone to create content, to distribute it, and to present it in new venues. YouTube has gotten things started by allowing small video clips to be shared and rated on web pages. RSS media aggregators, such as FireAnt or Democracy TV, go a bit further by enabling subscription to full-length videos in a more decentralized fashion. These services are just the start in the creation of a new digital media infrastructure where the audience can reach entirely new levels of content targeted at their interests.

YouTube

YouTube has taken ownership of 43 percent of the online video market[1], is delivering 100 million videos every day[2], and has an audience of about 20 million viewers, an increase of almost 400% in 6 months[3]. With this degree of success, YouTube is obviously providing something that viewers like. Like many web site startups today, they're following a user-first formula for attracting viewers, giving users the opportunity to turn the site into something they want.

Anyone is welcome to upload a video to YouTube and then share the link with friends. Offensive or illegal videos may be removed from the site when complaints are received via the "flag as inappropriate" link below the video. Each publisher can keep a video private amongst friends and family, or make it public and aim for the "most watched" category. With relatively few restrictions, and so many content producers, the video library is vast and potentially difficult to navigate without some help.

To provide help navigating the content, YouTube collects many statistics on the videos uploaded and includes a one-to-five-star rating scheme. Using this rating information, the number of views of a particular video, comments made by viewers, and other information, YouTube is able to provide suggestions and categories of videos the viewer might find interesting. Tags are collections of words describing the topic of the video and it is quite easy to look for popular videos with similar tags that have been rated highly by other viewers. All of this data management makes it quick and easy to be entertained by the huge library of content.

I'd be a bit surprised if anyone couldn't find something on YouTube to keep them entertained for weeks on end, but you might notice the current video quality level of YouTube is less than stellar. While some people are frustrated with the quality, it doesn't seem to be affecting the success of YouTube. While many viewers will pay for the quality of presentation provided by technology such as high-definition televisions, the large audience of YouTube shows that convenience, customization, and creativity, even at a minimal quality level, will bring viewers.

RSS Media Aggregators--Television for Participation Media

An RSS feed is the equivalent of a television broadcast tower for the Internet. RSS, or Really Simple Syndication, involves a web link (URL) and specially formatted information describing the subject matter, author, when new content is available, and other information that might be used to determine interest in the content. Most web browsers, such as Firefox, provide some support for RSS feeds, but do not natively handle all of the media types or provide significant automatic retrieval of content based upon the feeds.

An RSS media aggregator is a computer program that acts as the equivalent of a TV tuner and a TiVo, allowing a user to surf channels, find the content they desire, and automatically download the latest episodes without needing to manually explore web sites. Unlike traditional media, RSS feeds can be served from any computer on the Internet with a web server and there are many servers providing publishers these feeds for free. While today's RSS media aggregator programs and RSS feed servers can be somewhat easy to use and are centered around participation media, they solve a different set of problems than YouTube and don't yet provide all of the same features.
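The core of such an aggregator is simple enough to sketch, assuming an environment with fetch and DOMParser; the feed URL is invented, and a real aggregator handles many more formats and failure cases:

    const FEED = "http://example.com/videoblog/rss.xml";
    const seen = new Set<string>();

    async function poll() {
      const xml = await fetch(FEED).then((r) => r.text());
      const feed = new DOMParser().parseFromString(xml, "application/xml");
      // RSS enclosures carry the media attachments for each episode.
      for (const enclosure of feed.querySelectorAll("item > enclosure")) {
        const url = enclosure.getAttribute("url");
        if (url && !seen.has(url)) {
          seen.add(url); // a new episode: queue it for download
          console.log("fetching new episode:", url);
        }
      }
    }

    setInterval(poll, 60 * 60 * 1000); // tune in for new content hourly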

See the Josh Kinberg interview on XOLO TV.


Both FireAnt and Democracy TV are RSS media aggregators capable of utilizing BitTorrent to provide fast downloads of popular content, without the need to have powerful web servers involved as with YouTube. If RSS is like the television broadcast tower, BitTorrent is like the power generator for that tower. Utilizing BitTorrent, every computer used to download media content becomes a server for that same content. For every viewer who downloads a video, the download for every new viewer will be faster. The additional download capacity and decentralization provided by BitTorrent allows for higher quality videos to be served. Wikipedia refers to this technology as "broadcatching"[7] and the term connotes the many distributors to one consumer relationship[8] of these BitTorrent-enabled RSS media aggregators.

In addition to solving the bandwidth problem for publishers, broadcatching also enables revenue streams not available when using YouTube, due to its restrictions on posting advertisements.[9] Rocketboom, a popular video blog providing new content every day, Monday through Friday, chose to interact directly with advertisers and claims to reject product placement.[10] Instead, the blog creates ads directly for its advertisers, such as Earthlink.[11] This degree of control over content, quality, and availability, which can't be had on YouTube, will motivate publishers to look elsewhere when doing more than giving away random clips.

Democracy TV has a separate publishing component called Broadcast Machine.[18] This software integrates with a web server running on a computer to create a web site for publishing media content. Since the publisher controls the web site, there are fewer restrictions on what content can be published, and what video quality level can be achieved, than with YouTube. Despite the existence of Broadcast Machine, publishing content using broadcatching is still more complex than the file upload feature of YouTube.

In addition to the added complexity of publishing, the ease of finding content suffers from a more diverse body of publishing practices. The tags on content are much less likely to be stored in a consistent manner or to be the same as related content. Statistics and ratings on content won't necessarily be stored in a single place and there are more reasons not to trust the data that is available. Attribution to the original author is also a serious issue for both YouTube and any broadcatching environment, but without a central resource to resolve disputes, broadcatching is even more susceptible to false claims of ownership.

Even with the publishing issues, the need to download new software to collect broadcatching feeds could be the biggest barrier to wide acceptance. Microsoft's next version of Internet Explorer will provide some native RSS support[12], but the functionality will likely only be similar to Firefox's, and performance will be limited compared to the open-source broadcatching aggregators already available. Concerns over copyright infringement will keep companies like Microsoft moving slowly towards broadcatching. Research Microsoft has published recently on a technology they call Avalanche[13], a competitor to BitTorrent, indicates they will eventually catch up with the open source community.

What is Next?

The software download issue for broadcatching will certainly be solved. Lightweight Java, ActiveX controls, or browser plug-ins could simplify the software installation necessary to provide the desired experience. More adventurous solutions could utilize the JavaScript capabilities in the latest browsers to build the functionality directly, without additional installations. Creation of such easy-to-use tools could evolve slowly in the open source community or could be accelerated by businesses or partnerships that can identify the potential return on investment. Eventually, native support in the operating system or web browser will provide the required protocols and interfaces.

An alternative to solving the software download problem on PCs is to include all of the necessary software in an embedded system. Embedded systems ship with software installed and could provide all of the necessary components for publishing and subscribing to content. There is already at least one home network router on the market today, the Asus WL-700gE, with BitTorrent included along with a hard disk drive for storing the downloaded content.[15] The current feature set falls short of "YouTube-in-a-Box", but similar and additional functionality could be included in more complete embedded systems, including IP set-top boxes.

With all of this potential for involving everyone in participation media, it might be easy to lose sight of an alternative lesson from the varied success of YouTube and the RSS+BitTorrent solutions: branding still matters. The video blog Rocketboom has begun to receive numerous mentions in the mainstream media and lends some legitimacy to video blogs as a viable medium.[16] YouTube has further delivered legitimacy to other forms of participation media and organized it in a way that is convenient and entertaining. Alternatively, FireAnt and Democracy TV have been around about as long as YouTube, but without the success. A quick search on PRNewswire shows 17 mentions of "YouTube" in August 2006[17], but searches for "FireAnt" or "Democracy+TV" didn't yield any useful results. Without some promise of creative quality associated with a recognized brand, it is unlikely any new media distribution venture could ultimately succeed.

References

2006-12-03

Finding a voice (the problem with blogs)

I've been quite hesitant to start a blog with any actual content. From the ones I've read, it seems most folks aren't nearly as paranoid as I am. It is absolutely nerve-racking to think about how much information about me is available on the Internet with very little effort. Can I avoid my slightest fear that someone is going to hunt me down and poison my dog because I accidentally offended them?

There is also the issue of intellectual property rights and proprietary information. Will I slip up and give away something that is really owned by my company, get fired, get sued, and make my company go bankrupt? Will I give away my billion dollar idea to some ninny who then corrupts it to kill every puppy in the country?

And what about the whole deal with using someone else's server to share these ideas? How do I know they won't burn to the ground just when Bill Gates was going to read my latest blog entry on why he should use his foundation to cure a disease that would otherwise kill every puppy on Earth?

Nevertheless, I've decided the risk of NOT starting a blog is too great to keep beating around the bush. Living in ignorance of the value of my ideas, not being exposed to necessary feedback, isn't acceptable. Experiencing the problems of maintaining a blog is one of the best ways I can imagine to be a part of the solution. I hope that you'll join me.