K-means clustering with Processing.js


K-means clustering is an algorithm for quickly grouping a large quantity of data. It’s used in a variety of ways, from statistical analysis to improving the usability of user interfaces. If you read Google News, you’re probably familiar with the way it groups similar news items together. When I first saw that, I thought there must be some serious language processing and semantics behind it – that they somehow extract meaning from the articles and group them together accordingly.

Turns out it’s a little simpler than that. Article abstracts are split into words, and for each article a multidimensional vector of word counts is constructed. These vectors are then put into n-dimensional space, in which the number of dimensions corresponds to the total number of distinct words across all articles analyzed. Then a clustering algorithm is run on the articles in this space. It yields groups of correlated articles based on their word vectors.
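A minimal sketch of that vectorization step in plain Java (a hypothetical two-abstract corpus; real systems would also weight terms, e.g. with tf-idf, rather than use raw counts):

```java
import java.util.*;

public class WordVectors {
    // One dimension per distinct word across all abstracts, in sorted order.
    public static List<String> vocabulary(String[] abstracts) {
        SortedSet<String> vocab = new TreeSet<>();
        for (String a : abstracts)
            vocab.addAll(Arrays.asList(a.toLowerCase().split("\\s+")));
        return new ArrayList<>(vocab);
    }

    // Count vector of one abstract over the shared vocabulary.
    public static int[] vector(String text, List<String> dims) {
        int[] vec = new int[dims.size()];
        for (String w : text.toLowerCase().split("\\s+")) {
            int i = dims.indexOf(w);
            if (i >= 0) vec[i]++;
        }
        return vec;
    }

    public static void main(String[] args) {
        String[] abstracts = {
            "markets rally as stocks rise",
            "stocks fall as markets panic"
        };
        List<String> dims = vocabulary(abstracts);
        System.out.println(dims);
        // [as, fall, markets, panic, rally, rise, stocks]
        for (String a : abstracts)
            System.out.println(Arrays.toString(vector(a, dims)));
        // [1, 0, 1, 0, 1, 1, 1]
        // [1, 1, 1, 1, 0, 0, 1]
    }
}
```

Each article is now a point in seven-dimensional space, and distances between those points are what the clustering operates on.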

One clustering algorithm that does this is k-means. It works like this (quoting Manning’s excellent book “Algorithms of the Intelligent Web” under fair use):

The k-means algorithm randomly picks k points that represent the initial centroids of the candidate clusters. Subsequently the distances between these centroids and each point of the set are calculated, and each point is assigned to the cluster with the minimum distance between the cluster centroid and the point. As a result of these assignments, the locations of the centroids for each cluster have now changed, so we reevaluate the new centroids until their locations stop changing. This particular algorithm for k-means is attributed to E.W. Forgy and to S.P. Lloyd, and has the following advantages:

  • It works well with many metrics.
  • It’s easy to derive versions of the algorithm that are executed in parallel, when the data are divided into, say, N sets and each separate data set is clustered, in parallel, on N different computational units.
  • It’s insensitive with respect to data ordering.
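The loop described above is easy to sketch. Here’s a minimal 2D version in plain Java with Forgy initialization (the applet below does the equivalent in 3D):

```java
import java.util.*;

public class KMeans {
    // Lloyd's algorithm: assign each point to its nearest centroid,
    // recompute centroids as cluster means, repeat until assignments settle.
    public static double[][] cluster(double[][] points, int k, long seed) {
        Random rnd = new Random(seed);
        double[][] centroids = new double[k][];
        // Forgy initialization: pick k random points as initial centroids.
        for (int i = 0; i < k; i++)
            centroids[i] = points[rnd.nextInt(points.length)].clone();

        int[] assign = new int[points.length];
        Arrays.fill(assign, -1);
        boolean changed = true;
        while (changed) {
            changed = false;
            // Assignment step: nearest centroid by squared Euclidean distance.
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = sq(points[p][0] - centroids[c][0])
                             + sq(points[p][1] - centroids[c][1]);
                    if (d < bestD) { bestD = d; best = c; }
                }
                if (assign[p] != best) { assign[p] = best; changed = true; }
            }
            // Update step: move each centroid to the mean of its points.
            double[][] sums = new double[k][2];
            int[] counts = new int[k];
            for (int p = 0; p < points.length; p++) {
                sums[assign[p]][0] += points[p][0];
                sums[assign[p]][1] += points[p][1];
                counts[assign[p]]++;
            }
            for (int c = 0; c < k; c++)
                if (counts[c] > 0) {
                    centroids[c][0] = sums[c][0] / counts[c];
                    centroids[c][1] = sums[c][1] / counts[c];
                }
        }
        return centroids;
    }

    static double sq(double x) { return x * x; }
}
```

The loop ends exactly when no point changes cluster, which is the “locations stop changing” condition from the quote above.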

At this point you may wonder: what happens if the algorithm doesn’t stop? Don’t worry! The iterations are guaranteed to stop after a finite number of steps – in mathematical jargon, the algorithm converges. In practice, it converges quickly.

Of course, to do this on text in a production environment you’d use Apache Mahout, preferably in combination with Solr. But it’s also possible to experiment in Processing, although for this experiment I won’t use text, but points in space. You can see this in the Processing.js applet below. It fills 3D space with random points, adds cluster nodes, and then optimizes their positions so that every point belongs to a cluster, coloring the points of each cluster in the same color. The resulting configuration is in fact a 3D Voronoi diagram, which is related to Delaunay triangulation.

Click anywhere on the screen to restart, use the controls to refresh/stop, or select the number of clusters and points. I suggest downloading the original sketch here; it runs much faster.


City visualization in Processing with rudimentary traffic simulation


Interactive traffic simulation made with Processing. GIS data of Ljubljana, Slovenia, is read into RAM and converted into a vertex buffer object (VBO) with the GLGraphics library.

Then a directed network graph is constructed from the road data using the JGraphT library. Cars are initialized, and a list of routes is generated with Dijkstra’s shortest-path algorithm. Each car is assigned a random route and set on its way. When a car reaches its destination, it’s removed from the list and a new car is spawned.
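In the sketch, JGraphT does the routing; the idea behind it can be illustrated in self-contained Java (a toy adjacency map standing in for the road network, not the sketch’s actual code):

```java
import java.util.*;

public class Routes {
    // Dijkstra's shortest path over a directed weighted graph given as an
    // adjacency map: node id -> (neighbor id -> edge weight, e.g. road length).
    public static List<Integer> shortestPath(Map<Integer, Map<Integer, Double>> graph,
                                             int source, int target) {
        Map<Integer, Double> dist = new HashMap<>();
        Map<Integer, Integer> prev = new HashMap<>();
        PriorityQueue<double[]> queue =              // entries are {distance, node}
            new PriorityQueue<>(Comparator.comparingDouble(a -> a[0]));
        dist.put(source, 0.0);
        queue.add(new double[]{0.0, source});

        while (!queue.isEmpty()) {
            double[] top = queue.poll();
            int u = (int) top[1];
            if (u == target) break;
            if (top[0] > dist.getOrDefault(u, Double.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<Integer, Double> e :
                 graph.getOrDefault(u, Map.of()).entrySet()) {
                double alt = dist.get(u) + e.getValue();
                if (alt < dist.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                    dist.put(e.getKey(), alt);
                    prev.put(e.getKey(), u);
                    queue.add(new double[]{alt, e.getKey()});
                }
            }
        }
        if (!dist.containsKey(target)) return List.of(); // unreachable
        // Walk the predecessor chain back from target to source.
        LinkedList<Integer> path = new LinkedList<>();
        for (Integer at = target; at != null; at = prev.get(at))
            path.addFirst(at);
        return path;
    }
}
```

In the actual sketch each car simply follows the node list returned by such a query, with intersections as nodes and road segments as weighted edges.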


The whole sketch with data is available here for download. I developed it in Eclipse and then transferred it to Processing, so there are pure Java classes in it.

You will probably have to run it in 32-bit Processing 1.5.1. It won’t work in Processing 2.0, because there have been significant changes to its OpenGL support. There’s also a possibility it won’t run in 64-bit Processing because of the JGraphT library; I’ve had one such report.

My other project with GLGraphics is here.


Dance away on Eternal Dancefloors (interactive art project with Kinect)


Do you like to dance? I do. So, one morning in Berlin, after a long night of partying, I was going back home with a friend, and a morbid theme came up: why do we have to be buried or cremated after death, instead of being taxidermied, implanted with a robotic skeleton, animated with our own previously recorded movements, and left to dance the eternity away?

Turns out there are some good and some strange reasons why the state won’t let you get taxidermied, even if you specifically requested it. Vsauce has a very good video about it on YouTube. Plus, the idea of someone being remodelled into a robotic puppet and then sold, resold, stored in an attic by embarrassed grandchildren, or even uploaded with a new animation can soon get uncomfortable.

Nevertheless, we set out to create a foundation to do just that, if only as an art project. The first phase is a system to capture a visitor’s motion with a computer, store it in an accessible format, and visualize the whole thing as a big dancefloor, where the subject can dance with recordings of previous visitors. We’ll decide on the following phases as we go; maybe Google will release a low-cost robotic skeleton someday. We can use it to convert ourselves and launch into space to dance eternally on Noordung‘s space station.

Some screen footage (sorry for the low resolution; it was shot with a phone):

It’s essentially a motion-capture program that visualizes dancers’ motion on a virtual dancefloor and stores the 3D data for later use. For motion capture we used one Kinect, for 3D animation Processing, and for rendering the excellent OpenGL library GLGraphics. Captured data is stored in a Solr index, searchable by the dancer’s name.

The whole procedure goes like this: the visitor comes into the Kinect’s field of view and is instantly recognized without needing to strike a pose. A countdown to recording starts, and after ten seconds a ten-second clip of the visitor’s movement is recorded, while the visitor can watch their movements on screen and synchronize them with previously recorded dancers. It’s more fun to dance in company, after all. Here’s the whole “workflow” on video. It’s choppy, but it’ll do.
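The installation code isn’t reproduced here, but the recognize/countdown/record cycle just described can be sketched as a small state machine (hypothetical class, frame-based updates assumed; in the real sketch the frames would be skeleton joints from the Kinect):

```java
import java.util.*;

public class RecorderStateMachine {
    enum State { WAITING, COUNTDOWN, RECORDING }

    State state = State.WAITING;
    long stateStart;                         // millis when the current state began
    List<float[]> clip = new ArrayList<>();  // one skeleton frame per entry

    // Called once per frame with the current time and the tracked visitor's
    // skeleton joints (null when nobody is in view).
    State update(long now, float[] joints) {
        switch (state) {
            case WAITING:
                if (joints != null) enter(State.COUNTDOWN, now); // visitor recognized
                break;
            case COUNTDOWN:
                if (joints == null) enter(State.WAITING, now);   // visitor left
                else if (now - stateStart >= 10_000) {           // ten-second countdown
                    clip.clear();
                    enter(State.RECORDING, now);
                }
                break;
            case RECORDING:
                if (joints != null) clip.add(joints);            // store this frame
                if (now - stateStart >= 10_000)                  // ten-second clip done
                    enter(State.WAITING, now);                   // hand clip off for storage
                break;
        }
        return state;
    }

    void enter(State s, long now) { state = s; stateStart = now; }
}
```

On the transition back to WAITING, the finished clip would be indexed under the dancer’s name and the visualization would pick it up as one more dancer on the floor.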

The installation premiered at Maribor Electronica Days in Maribor, Slovenia, on February 15th, 2013, sponsored by Kibla. Shot through the house videographer’s lens, it looked like this:

video: Matej Kristovič, shown at: Festival MED in Maribor organized by ACE KIBLA.

The project in its original form was shortlisted for Robots and Avatars last year, but we unfortunately didn’t win. The name was a little more convoluted then; I think Eternal Dancefloors is better than 1st Stage Preparations for a Taxidermic Afterlife Party, as it was titled at the time.

There was a lot of big talk in the project documentation. Read this if you can:

‘1st Stage Preparations for a Taxidermic Afterlife Party’ is a part of a planned wider ‘Taxidermic Afterlife Party’ project, which is firstly addressing the problem of the disappearing intergenerational solidarity through the creation of taxidermic dancing afterlife avatars.

As a conceptual starting-point we take the present situation, where society’s mechanisms are less and less able to provide for its older – i.e., “non-functional” – members. As a response to this phenomenon we strive to establish an absurd dystopian vision of a situation that has gone out of hand, where we have got real physical avatars with no reasonable purpose, but they do not want to go away (are present after the individual’s life) and on top of this also need to be kept up (because we deal with real prepared human bodies – containing a dance mechanism – that need to go dancing / clubbing, as they function on the basis of the Tamagotchi principle).

The artists themselves are of course submitting their bodies to this artistic project as an act of social comment.

Because of the fact that in our society you have got only three options of what can be done with your body (burial, cremation or liquefaction), one of the aims of this 1st stage is the assertion of the right to get prepared after death. Individuals that are taking part in this project are also signers of this claim (although you can take part and not sign the claim and vice versa). This whole vision might be dystopian in its core, but there is also something romantic in dancing just a little bit longer …

In the history of humankind, dance is one of the oldest forms of expression, social interaction and establishment of collective identity; it was a part of the first rituals, also meant to change each individual’s neural activity in order to reach this state of collective identity. The vanishing of this phenomenon or its limitation to the club environment in today’s society on one hand, and the flourishing use of social networks on the other, makes it interesting to put this ”primitive praxis” (dance) in the context of new technologies (virtual environment).

Stages of the whole lifelong and afterlife project:
– 1st: establishment of the dance moves database with visualization and interaction platform and functionality for asserting the right to get prepared after life via a petition
– 2nd: getting in touch with competent and/or suitable institutions (e.g., cyborg foundations) resulting in actual preparation
– 3rd: taxidermic afterlife party: embodiment of recorded database by actualization in a robotic platform
– 4th (“sad-but-true” future vision): you / your avatar will probably get sold on eBay, stored in some dusty garage, your dance moves are going to be hacked to sadly entertain the owner’s drunken friends … But no one is saying that the first exemplar is not going to end up in Guggenheim.

We have a process here where the dancing human body is substituted by a digital representation (caught with motion capture) and later on the digital representation gets substituted again by the real body (prepared body with an implanted robotic mechanism). The whole project is resulting then in reversing the process where we establish an avatar through the omission of the real body and make our own personality avatar’s content – now this at one point “abandoned” dimension (i.e., the real body) becomes the avatar …

The 1st stage of the project includes an interactive installation, where individuals record their dance moves through the usage of motion capture, and the development of an online virtual environment. This interauthorship (individuals contributing to the database of dance moves) can be seen as an investment into the individual’s future presence and also as a contribution to the future presence of others, as the project is based on the creative commons principle. The database can be understood as a prospect for your own and others’ afterlife presence, but also as a part of the responsive environment, in which individuals enroll and take an active part in this lifetime. People would be called to get their digital dancing avatars through announcements / appeals in mass media.

The 1st stage can be interpreted as a project in itself with the following outputs:
– (world’s largest) database of freestyle party dance moves, including moves by professional dancers and supporters of the project
– online virtual environment, i.e., visualization and interaction platform for recorded dance moves
– a base of exclusive music sets contributed by well-known artists
– a formal claim for a right to get prepared after death.

I hope you enjoyed the videos.

There is another one, shot in development phase:

Project authors:

Pina Gabrijan (concept and organization)

Marko Plahuta (concept, programming and art)