Processing and Processing-JS on the BeagleBoard under Angstrom

After reading the blog post about the serial port challenges in Processing and the method for fixing them, I decided to download the IDE and try it myself.  It seems to work reasonably well.  I haven't quite managed to learn enough about the Arduino to understand how they use Processing or how limited their implementation of it is, but I was able to install and run the Arduino tool on my BeagleBoard after applying Koen's serial port library fix.  I hope to use the tool to download applications onto my Trainer-xM, but I haven't yet been able to compile the examples natively because I haven't yet built avr-gcc.

This is closely related to my current personal project: working with Processing-JS.  I've been talking a lot about a BeagleBoard Linux Education project, but haven't really kicked it off beyond working with others on the scope.  Mark Yoder's ECE497 class is where the most visible advancement seems to be taking place right now, though I've heard of several other educators also creating courses with the BeagleBoard, including the Stanford New Musical Controllers workshop.

For me, I like the ideas of avoiding compilation, working in the most popular development environment today (the browser), and being able to access my interface from anywhere in the world over the web.  I also believe the zero-install nature of web applications and the familiarity of working within a browser make it the perfect environment for newbies.  To that end, I've made a little fork off of my Linux education project to focus on JavaScript-based development.  I now have both Cloud9 and Processing.js installing as submodules.  Under Angstrom, I'm able to easily run Chromium as one of my browser options and see the output from Processing.js, such as in the picture here.  Now to see which environment gets me talking to my Trainer-xM board first.

Also, sorry about the delayed BeagleCast this week--resulting in some of the topics not being timely.  I've had a challenge getting a call scheduled with Khasim to discuss Rowboat.


Node.JS-based Cloud9 JavaScript IDE running on a BeagleBoard

Yesterday, I decided I was going to get Cloud9 running on my BeagleBoard. I didn't document every step, but below are a few helpful hints. I'm working with a recent Angstrom Distribution build with Node.JS v0.2.6.

At first, I tried using Node.JS v0.2.1 and I found that the 'connect' module wasn't present. I went to install 'npm', but the install attempts failed silently. When I found that v0.2.6 was in the feeds, I tried installing 'npm' again and it succeeded without much of a headache. I installed v0.2.19 of 'npm'.

Since the node-o3-xml package needs to be compiled for Cloud9, I installed a bunch of native tools onto my BeagleBoard, including 'opkg install task-sdk-native nodejs-dev', but I don't remember 100% of what I installed. My confusion began around an error on 'import Scripting' when running node-waf. Fortunately, someone already figured the issue out for me when they were trying to build node-inotify on a Gumstix board. I installed the extra Python tools and copied the .py code from my Mac. I did need to edit wscript to remove any x86 specific optimization flags and remove any .pyo files that accidentally got copied over.

Once I built o3, I installed it into my Cloud9 directory, which I checked out following the online directions. I am working from the Cloud9 0.2.0 tag. Since the submodules of Cloud9 include a set of pre-built binaries for o3, I added a repository that included my newly built ARM binaries.

I think that was pretty much all it took to get Cloud9 invoked, but using the latest Firefox 4 beta as a client wasn't working. I tried using the '-d' flag at invocation and moving Cloud9 to a user account instead of root, but that didn't help. Based on a blog post that described invoking Cloud9 on an Ubuntu machine, I was using this command line to perform the start-up:

node ~/cloud9/bin/cloud9.js -c ~/cloud9/config.js -w ~/testproject -d

I decided to install Chrome to see if it was a browser dependency and voila!

After getting a simple web server app going, I decided to poke the LED SYSFS entries. I needed to change the SYSFS file entry permissions to 777 to enable my user account to set the state, but I was easily able to do so.

The next step is to show how JavaScript closures can be used to create a web page that responds quickly when the USER button is pressed, generating a Linux input event.
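To give a flavor of the closure part, here's a tiny sketch (the names are mine, and the actual wiring to the Linux input event on /dev/input is left out):

```javascript
// makeButtonHandler() returns a function that remembers, via a closure,
// how many times the USER button has fired -- no globals required.
function makeButtonHandler(name) {
  let presses = 0; // captured by the returned function
  return function onPress() {
    presses += 1;
    return name + ' pressed ' + presses + ' time(s)';
  };
}

// Each handler keeps its own private count.
const onUserButton = makeButtonHandler('USER');
console.log(onUserButton()); // USER pressed 1 time(s)
console.log(onUserButton()); // USER pressed 2 time(s)
```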

Am I the only one who gets how cool it will be to distribute pre-configured SD cards that you can drop into your BeagleBoard, plop onto a network, and start editing code to peek and poke hardware using an IDE without ever installing *anything*?


Google kills Blogger Web Comments

Often I find websites where people are just being stupid and need to be told so. In those cases, I don't think relying on the ignorant host of the site to provide a comment page to let me tell him how much of an idiot he is being will really work.

Then there are those cases where I wonder what other people interested in the same site are thinking, when that site doesn't allow direct comments to be posted, or where I don't trust the host not to pull down negative comments.

So, what are my options?

Well, I used to make a lot of use of the Blogger Web Comments for Firefox. This was a pretty handy tool that would fetch comments using Google's Blog Search. Since I've recently upgraded to Firefox 3, I thought it was a good time to go look for an update to the plug-in and to see if I could get that functionality back.

Unfortunately, the plug-in is no longer available. This isn't the first time I've run into a brick wall with Blogger Web Comments for Firefox, but it seems they've decided to drop it rather than fix it.

Hopefully others will still see the promise in this sort of functionality and provide something, but in the short term, I'll be stuck performing copy-paste operations and executing 3-5 clicks to get similar output manually from del.icio.us, Google blog search, and Technorati.

I'll be visiting those search options regularly to see if someone picks up on this feature.


Coining a phrase, the Contextual Web

I was getting started writing up a "master paper" to serve as a guideline for submissions to several conferences this year, including Lug Radio Live USA. In this paper, I planned to coin a phrase, "The Contextual Web". I figured, if I plan to coin a phrase, I should at least ask Google if anyone has tried to do that before me.

It turns out that someone has; they did it recently, and the synopsis looks eerily like the one I had written in some drafts. I'm not trying to claim that anyone stole my idea, or that I even had it significantly earlier than anyone else. To the contrary, I'm trying to claim that this idea is just that obvious. Here's a clip from the page I found when I did a Google search for "the contextual web":

The next generation of the web isn't going to be on your desktop, it may not even be on your mobile device. Context is going to be increasingly important and Nick will take you through the process of designing and architecting for context as well as regardless of the context.
Well, Nick Finck, you've got my attention. A few more searches with Nick's name in the search box return some additional gems:
There are four Elements of Context – the User, the Task, the Environment, and the Technology. Who is your user and what obstacles are they facing; what task are they trying to complete; what is the environment in which they are working; and what kind of computer or device are they using? Designing interactive experiences is not limited to the web on your computer or phone – consider gas pumps, fridges, or devices like Microsoft Surface.
This definitely puts my ego into perspective. Nick, I'm supporting the Beagle board just for you. :)


Adding a URL to 'gitweb'

It is as simple as creating a 'cloneurl' file in the git repository directory, just like you can add a 'description' file.
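For example (using a temporary directory as a stand-in for the real bare-repository path, which on a server might look like /srv/git/project.git):

```shell
# gitweb reads a 'cloneurl' file sitting next to 'description' in the
# repository directory; one clone URL per line.
REPO=$(mktemp -d)    # stand-in for your bare repo directory
echo "git://example.org/project.git" > "$REPO/cloneurl"
cat "$REPO/cloneurl"
```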

This took about 7 minutes of exploring the CGI code of gitweb to find. I spent about 20 minutes exploring the web based on some links I was given that were 'supposed' to explain this, because this was the big feature that was missing from my gitweb installation. Ugh!

Come on Linux folks, are you just trying to make easy things difficult?

Example: http://www.beagleboard.org/gitweb/?p=beagleboard.org.git as sourced by http://www.beagleboard.org/beagleboard.org.git.


Making the connection between Gears, GreaseMonkey, JXTA, and OpenID

A while back, I wrote up a "Collaborative GreaseMonkey" patent disclosure. It was a defensive measure to make sure no one else patented the idea and prevented the rest of us from using it. The disclosure never made it past our patent committee, and I think that is fine, since it is at least documented as prior art in some way. The code never got to the point where it was worth sharing, but I do plan to revive it at some point.

I'm seeing more and more people start to get ideas that look a lot like what I had in mind. Today, I read about someone dreaming up thoughts on using Google Gears to perform OpenID and OAuth. I like the thought pattern.

Gears, GreaseMonkey, OpenID, and P2PSockets (JXTA) have the potential to re-invent the web and to establish a real web operating system. Gears enables the JavaScript written into web pages to become part of a real, persistent application with persistent data storage and threads. GreaseMonkey provides a solution to edit existing web applications with user-controlled, local customizations and to create fully local applications, without needing to learn how to write a web server application. OpenID gives a single solution for authenticating yourself across those web applications. P2PSockets allows the applications and data you host locally to be discovered on the web without needing to own a web server.

The result is an application building environment that is an incremental step from simple HTML+JavaScript editing and allows everyone to invent their own web, rather than just rely on the web that the social networking sites control today.

The success of this web is, of course, controlled by the economy it creates. An a-la-carte business model, like the one provided by Amazon's web services, is a great way to provide the bandwidth and data storage necessary for locally-hosted services to scale.


Open source on TI devices

I happen to like this article, TI targets Linux and open source with new OMAP chips, but I certainly have gotten the message "more patches, less powerpoints". We'll see over the next few months...