Could computers vote instead of parliamentary representatives?


Haha, what a funny question. Of course they can’t. How can one teach a computer all the intricacies of the lawmaking process, and trust it enough to let it vote? This must surely be a recipe for disaster.

Yet, as I realized in previous research, the parties mostly demand ruthless discipline from their parliamentary representatives at voting time, simply to be able to actually govern in Slovenia’s multiparty democracy, where there’s never an absolute winner. This leads to coalition governments, where every vote counts towards a majority.

That means that in a polarized parliament, one could theoretically predict a representative’s vote by examining the votes cast by all other representatives. If an opposition party proposes to vote on an act, it’s very likely that members of the government bloc will uniformly, or at least predominantly, vote against it, and vice versa. There are a few exceptions to that rule, namely some profoundly ethical decisions, on which majority parties will let their members vote by conscience. But they are few and far between.

Fun with neural networks

I decided to test this out by modeling some representatives as neural networks, and training the networks with a few voting sessions and their outcomes from the beginning of the parliamentary term.

The model for each representative was fed the votes of every other rep as input, with his or her own vote as the desired output. This was repeated over all hundred training sessions until the model converged (loss fell under 0.05).

The model was then shown voting sessions it hadn’t seen yet and tasked with predicting the outcomes.
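The setup above amounts to building one supervised training pair per session. A minimal sketch, assuming votes are stored as one array per session (variable and function names here are illustrative, not from the original code):

```javascript
// Sketch: build training pairs for one representative (index `target`).
// `sessions` is an array of voting sessions; each session is an array of
// votes, one per representative: 1 = in favor, -1 = against, 0 = absent.
function buildTrainingSet(sessions, target) {
  return sessions.map(function (votes) {
    return {
      // input: everyone's vote except the modeled representative's
      input: votes.filter(function (_, i) { return i !== target; }),
      // desired output: the modeled representative's own vote
      output: votes[target]
    };
  });
}

// Example: three reps, two sessions; model rep 0.
var sessions = [[1, -1, 1], [-1, -1, 0]];
var pairs = buildTrainingSet(sessions, 0);
// pairs[0].input -> [-1, 1], pairs[0].output -> 1
```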

The results are shown in images below. For each representative, the image contains:

  • name and party,
  • training vector (the votes he/she cast in the first 100 voting sessions – red for “against”, blue for “in favor”, yellow for absence for whatever reason),
  • actual votes (400 votes the network hasn’t seen and was trying to predict),
  • predicted votes (how the neural network thought the representative would vote), and
  • difference indicator (with red rectangles for wrong prediction, green rectangles for correct prediction, and yellow rectangles for absence)

I didn’t bother much with statistics to determine who was the most predictable, nor did I try to predict voting for every rep.

In short, those with the mainly green bottom strip were the most predictable.

Government coalition




[Prediction images: Srecko Blazic, Miran Brglez, Erika Dekleva, Anita Kolesa, Franc Lay, Branko Zorman, Vesna Vervega]





[Prediction images: Ljudmila Novak, Matej Tonin]





[Prediction images: Jozef Horvat, Laszlo Goncz]


[Prediction images: Janez Jansa, Anja Bah Zibert, Andrej Cus, Franc Breznik, Jelka Godec, Zan Mahnic, Bojan Podkrajsek, Marijan Pojbic, Andrej Sircelj]





[Prediction images: Luka Mesec, Jani Moderndorfer, Peter Vilfan]

A cursory examination of results yields several realizations:

  • even in the best predictions with the lowest error rate, the model doesn’t predict absences well, especially for representatives with a low incidence of absence in the training data. This is intuitively understandable on two levels: first, it’s hard for the network to generalize something it didn’t observe, and second, absences can happen on a human whim, which is out of reach for a mathematical model. For representatives of opposition parties, who frequently engage in obstruction as a valid tactic, the model fares a little better.
  • the model best predicts the voting behavior of majority party (SMC) members.
  • the model utterly fails to predict anything for representatives who were absent in the training period (duh).

So, could we substitute the actual representatives with simple neural networks? Not with this methodology. The problem is that we need the votes of everyone else in the same session to predict the vote of the modeled rep, so at the time of prediction, we already have their vote. We don’t have a way of inferring votes from scratch, or from previous votes.

We could, in theory, try to predict each rep’s vote independently of the others by training the network on the texts of proposed acts. I speculate that a deeper network could correlate vectorized keywords in the training texts with voting outcomes, and then predict voting for each rep independently on previously unseen texts. Maybe I’ll do that when I get the texts and learn a bit more. It’s still the ANN 101 period for me.

I used a simple perceptron with 98 inputs (there have been 99 representatives in this term, counting also current ministers and substitutes), a hidden layer of 60 neurons, and a softmax classifier at the end.
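The forward pass of that architecture can be sketched in plain JavaScript. This is only an illustration of the shape 98 → 60 → softmax(3); the weights here are random placeholders, whereas in the original convnetjs handled initialization and training:

```javascript
// Numerically stable softmax over raw class scores.
function softmax(zs) {
  var m = Math.max.apply(null, zs);
  var exps = zs.map(function (z) { return Math.exp(z - m); });
  var sum = exps.reduce(function (a, b) { return a + b; }, 0);
  return exps.map(function (e) { return e / sum; });
}

// One fully connected layer: weights is [neurons][inputs].
function layer(input, weights, biases, activation) {
  return weights.map(function (row, j) {
    var z = biases[j];
    for (var i = 0; i < input.length; i++) z += row[i] * input[i];
    return activation ? activation(z) : z;
  });
}

var sigmoid = function (z) { return 1 / (1 + Math.exp(-z)); };

function randMatrix(rows, cols) {
  return Array.from({ length: rows }, function () {
    return Array.from({ length: cols }, function () { return Math.random() - 0.5; });
  });
}

// 98 inputs -> 60 hidden neurons -> 3 classes (against / absent / in favor)
var W1 = randMatrix(60, 98), b1 = new Array(60).fill(0);
var W2 = randMatrix(3, 60), b2 = new Array(3).fill(0);

function predict(votes) {            // votes: 98 values in {-1, 0, 1}
  var hidden = layer(votes, W1, b1, sigmoid);
  var scores = layer(hidden, W2, b2, null);
  return softmax(scores);            // probabilities for the three classes
}
```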

As usual, I used Karpathy’s convnetjs for modeling, and d3 for visualization. Dataset comes from Zakonodajni monitor.

Grouping entries in Slovenian Wikipedia by contributors


Some time ago, I helped Miha Mazzini extract some data from Slovenian Wikipedia. For that, I needed to write a comprehensive parser, extracting not only titles and text, but also number of overall and per-contributor revisions, along with contributor usernames.

So, for each entry, I got a list of contributing accounts and number of edits that were performed by that account. I wondered: how are the areas of expertise distributed among all those contributors? Are some of them specialized mainly for science, some others for politics, and so on? And, perhaps more interestingly, where do these areas overlap? Do we have experts for sport, who also happen to curate political entries?

To find out, I extracted all entries with more than 25 edits, vectorized them by contributing accounts, and ran t-SNE to cluster them spatially and prepare the visualization. When the t-SNE layout was complete, k-means clustering was run on the x,y coordinates alone to distinguish the areas by color. It must be said that these colored groups don’t always coincide with semantic grouping, so take them with a grain of salt. They’re there mainly to make the map look better and to improve legibility. Font size is proportional to the number of revisions an entry has had so far.
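The coloring step is ordinary k-means on the 2D t-SNE coordinates. A self-contained sketch (the original used the clusterfck library; names here are illustrative):

```javascript
// k-means on an array of [x, y] pairs produced by the t-SNE layout.
// Returns a cluster index per point, used to pick a color.
function kmeans(points, k, iters) {
  var centers = points.slice(0, k).map(function (p) { return p.slice(); });
  var labels = new Array(points.length).fill(0);
  for (var it = 0; it < (iters || 20); it++) {
    // assignment step: nearest center by squared distance
    points.forEach(function (p, i) {
      var best = 0, bestD = Infinity;
      centers.forEach(function (c, j) {
        var d = Math.pow(p[0] - c[0], 2) + Math.pow(p[1] - c[1], 2);
        if (d < bestD) { bestD = d; best = j; }
      });
      labels[i] = best;
    });
    // update step: move each center to the mean of its assigned points
    for (var j = 0; j < k; j++) {
      var members = points.filter(function (_, i) { return labels[i] === j; });
      if (members.length) {
        centers[j] = [
          members.reduce(function (s, p) { return s + p[0]; }, 0) / members.length,
          members.reduce(function (s, p) { return s + p[1]; }, 0) / members.length
        ];
      }
    }
  }
  return labels;
}
```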

Here’s the entire map as one big 9000 x 6000 image. Click the image to display it, then zoom in with the mouse. You can find cropped clusters and some commentary below.

Wikipedia entries by contributing authors
Wikipedia entries by contributing authors – entire map (click for enlargement)

Turns out the areas of expertise are pretty well delineated.

Let’s look at some clusters. Here we have some geographic Wikipedia entries, mainly countries and some historical persons. That sounds logical – editing an entry about a great ruler probably causes one to contribute to an entry about his or her country.


Here are some famous Slovenian people, mostly writers, intermixed with some towns. In the lower left quadrant, there are also some Slovenian politicians. It’s funny that the late Communist ruler Josip Broz Tito is so close to Janez Janša, who is the current leader of the right-wing opposition. It appears that a number of people edit both entries. I wonder why. Here’s an article by Miles Mathis about the editing of Wikipedia. I don’t know what to think of it, but I’ve surely read enough about the autocratic rule of (English) Wikipedia editors to give it some credence. I don’t know about the Slovenian version, and I don’t want to speculate, but this is as good an opportunity as any to start thinking about it.


Here is a cluster of lists. It seems that an entire group of people curates them, regardless of their content.


Here’s a funny cluster dealing mostly with public transit in Slovenia. It almost seems that there are some bureaucrats in the government who edit these entries on the taxpayer’s dime. I could probably find that out if I traced the IPs in the edit logs. If someone hires me as an Internet detective, I might do that, but I made these pictures for fun.


This is an interesting cluster. It appears that many of the same people edit entries about the Eurovision Song Contest, parliamentary elections, the World Cup in basketball and World Cups in skiing, along with two new parties in the Slovenian parliament: ZaAB, which is the remainder of the majority party from the last parliamentary term, and SMC, which is the new majority party. Both parties, along with Pozitivna Slovenija (the former majority party), were founded hastily right before elections, and won them in a landslide. I wonder how political analysts would comment on their members’ (speculative) love for sports contests.

Here we have many religious personalities, mostly popes.

A grouping of entries about Slovenian popular music.


Some entries about historical scientists and natural sciences.


More geographical entries, along with some entries about Slovenian highways.


Here’s the center of the map. It stands to reason that entries with many non-specialized contributors are drawn towards it. It’s generally more chaotic than the outskirts, but a great many contemporary and historical art personalities are grouped together here.


Another snapshot from the center, mostly consisting of entries about workers’ rights and things related to work.


So here it is. For the technically minded – everything was done in JavaScript with Andrej Karpathy‘s tsnejs library, clusterfck for k-means, and d3 for drawing.

I also made an inverted map, on which the contributors are shown, grouped by the entries they made revisions to. It’s not so interesting for the general public, but if someone wants to see it, it’s available on request.

Similarities between representatives in Slovenian parliament


The title should actually be “An exploration of dimensionality reduction techniques on voting dataset from Slovenian parliament”.

I’ve long been putting off a proper and comprehensive study of various machine learning techniques, especially those related to neural networks. I feel I’ve made a few baby steps towards that goal with this research, which is actually a writeup of a project I did for a local newspaper in collaboration with the excellent designer Aljaž Vindiš (follow him on Twitter).

The dataset comes from another project that I’m collaborating on with Transparency International Slovenia and Institut Jožef Stefan. Zakonodajni monitor is a platform for inspecting the legislative process and for following the activity of parliamentary representatives, intended mostly for journalists and researchers. Among other things, it contains records of every vote cast in parliamentary sessions by every representative, which is then used for various statistics and visualizations. It also has an API for public access to that data, although I have it in a local database too, making it somewhat easier for me to explore it.

This project is an attempt to visualize relationships between representatives and parties in two dimensional space, or on a line, to better understand the dynamics of power in Slovenian politics. It’s a part of my ongoing collaboration with Dnevnik newspaper for data analysis and visualization. Since the project was not supposed to be interactive from the start, one important constraint was that the results should be fit for a paper version of the newspaper.


Each representative has a great many properties in the database, among them a vote vector containing a record of his or her votes so far. A “yea” vote is 1, “nay” is -1, and abstention for whatever reason is 0. At the time of this project, there had been a little fewer than 650 voting sessions in this parliamentary term, so the input data for each representative was a vector with approximately 650 dimensions. Our objective was to construct a one- or two-dimensional visualization, which would hopefully confirm our existing knowledge about alliances between parties and individuals in the parliament, and, if possible, reveal new and interesting information.
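The encoding itself is a one-line mapping. A minimal sketch, assuming the raw votes come as strings (the field names are my assumption about the data shape, not the actual schema):

```javascript
// Map a raw voting record to the numeric vote vector described above:
// "yea" -> 1, "nay" -> -1, any kind of abstention or absence -> 0.
var VOTE_CODE = { yea: 1, nay: -1, absent: 0, abstained: 0 };

function voteVector(rawVotes) {        // e.g. ["yea", "nay", "absent", ...]
  return rawVotes.map(function (v) { return VOTE_CODE[v]; });
}
```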

To effectively communicate this information, we had to employ some dimensionality reduction techniques, of which we tried three:

  • PCA (principal component analysis),
  • autoencoder,
  • t-SNE

In the end, we decided on t-SNE because it’s fast and convenient, but the other two methods, with the exception of PCA in two dimensions, gave very similar results.

What is “dimensionality reduction”, you might ask? It’s a set of techniques for making sense of complex data. A shadow is a simple natural reduction technique, because it’s a projection from three-dimensional space into two. Continuing the analogy, if you want to recognize a person from their shadow, the position of the sun matters a great deal. For example, the sun directly over a person’s head doesn’t give us much information about the person’s shape. It’s necessary to find a proper angle.

These various techniques have much to do with properly positioning the “sun” in relation to the data, to retain the maximum possible amount of information in the projection. Of course, if you project from 650 dimensions into one, a lot of information is lost. Also, in many cases it’s not immediately clear what the axes of the projection mean exactly. Read on, I’ll try to elaborate below.


We started with an autoencoder. An autoencoder is a form of artificial neural network often used for dimensionality reduction. It is a deep neural network with many layers that essentially tries to teach itself identity: it’s trained to generalize patterns in the data by compressing the knowledge in some way, and then recreating it. We used an autoencoder with 650 inputs, two layers of 100 neurons each, then a bottleneck layer with only two neurons, followed by an inverted structure acting as a decoder. When training was complete, every representative’s vector was again propagated through the network, and the activations of the two bottleneck neurons were saved as a coordinate pair. These were then plotted on a 2D canvas, resulting in the image shown below.
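In convnetjs, that layer stack could be declared roughly like this. Only the layer sizes come from the description above; the activation choice and everything else are my assumptions, not the original configuration:

```javascript
// Hedged sketch of the autoencoder layer stack in convnetjs syntax.
var layer_defs = [];
layer_defs.push({ type: 'input', out_sx: 1, out_sy: 1, out_depth: 650 });
layer_defs.push({ type: 'fc', num_neurons: 100, activation: 'tanh' }); // encoder
layer_defs.push({ type: 'fc', num_neurons: 100, activation: 'tanh' });
layer_defs.push({ type: 'fc', num_neurons: 2, activation: 'tanh' });   // bottleneck
layer_defs.push({ type: 'fc', num_neurons: 100, activation: 'tanh' }); // decoder
layer_defs.push({ type: 'fc', num_neurons: 100, activation: 'tanh' });
layer_defs.push({ type: 'regression', num_neurons: 650 });             // reconstruct input

var net = new convnetjs.Net();
net.makeLayers(layer_defs);
```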


Legend for clarification:

  • brown (SMC), blue (DeSUS) and red (SD) are leftist position parties with a heavy majority in the parliament. Much could be said about their leftism, but let’s leave it at that.
  • violet (SDS) and green (NSi) are the rightmost opposition parties. They are vehemently anti-communist (SDS) and Catholic-conservative (NSi).
  • rose is the oppositional ZaAB, which is the party of former prime minister Alenka Bratušek. It leans to the left.
  • grey is oppositional ZL, which is Slovenia’s version of Syriza.


The dataset was relatively small, so the autoencoder was implemented in JavaScript with Karpathy’s excellent convnet.js library. Training took two hours on an i7 machine with 16 GB of RAM.
As a small branch of this project, we also tried to arrange the representatives on an ideological spectrum. For this, a similar neural network was used, but we first trained it with the most left- and right-leaning representatives to obtain the extremes, then fed the others through it and plotted the regression scores in one dimension. This arrangement is somewhat different from the final one.



Principal component analysis

Next was an attempt to validate our results with PCA. Principal component analysis is a (quote) “technique used to emphasize variation and bring out strong patterns in a dataset. It’s often used to make data easy to explore and visualize”. It’s essentially a method for projecting data from a multidimensional space onto a lower-dimensional (say 2D) one, while trying to retain as much information as possible. The first axis is chosen so that the variance along it is maximized, maximizing the information; the others follow in a similar fashion, with the constraint that they must be orthogonal.
We ran PCA for one- and two-dimensional solutions, shown in the images below.
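For the one-dimensional case, the first principal component can be found with simple power iteration on the data matrix, no library required. A sketch under the assumption that `data` is the array of 650-dimensional vote vectors:

```javascript
// Project each row of `data` onto the first principal component,
// found by power iteration: v <- normalize(X^T X v).
function firstComponent(data, iters) {
  var d = data[0].length;
  // mean-center the data
  var mean = new Array(d).fill(0);
  data.forEach(function (row) {
    row.forEach(function (v, i) { mean[i] += v / data.length; });
  });
  var X = data.map(function (row) {
    return row.map(function (v, i) { return v - mean[i]; });
  });
  // power iteration on the (implicit) covariance matrix
  var v = new Array(d).fill(1 / Math.sqrt(d));
  for (var it = 0; it < (iters || 50); it++) {
    var next = new Array(d).fill(0);
    X.forEach(function (row) {
      var proj = row.reduce(function (s, x, i) { return s + x * v[i]; }, 0);
      row.forEach(function (x, i) { next[i] += x * proj; });
    });
    var norm = Math.sqrt(next.reduce(function (s, x) { return s + x * x; }, 0));
    v = next.map(function (x) { return x / norm; });
  }
  // each representative's 1D coordinate is the projection onto v
  return X.map(function (row) {
    return row.reduce(function (s, x, i) { return s + x * v[i]; }, 0);
  });
}
```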



Here’s the one-dimensional variant:



Finally, we used the t-SNE algorithm with one- and two-dimensional solutions. t-SNE (“t-distributed stochastic neighbor embedding”) is another technique for dimensionality reduction, well suited for visualizing complex datasets in 2D or 3D. You’ll mostly see it in articles dealing with classification of complex data, for instance images and words, where you can see nice plots of similarly themed images, or words with similar meanings, clustered together. Here we used it on our voting data, and the results were quite good. First we tried a 2D visualization. It’s roughly similar to the one derived from the autoencoder.

Dot sizes correspond to voting attendance. You can see that the representatives with lower attendance are drawn to the center. Also, note that the violet group (the SDS party), which is the true and fervent opposition, is relatively close to those with lower attendance. This is simply because the opposition frequently employs obstruction as a parliamentary tactic, or is simply not there for other reasons.

The neutral control point is the azure rectangle in the center. It’s simply a hypothetical rep that always abstained.


See the voting records for the opposition (yellow is absent, red is a vote against, blue a vote in favor):


And here are records for some ministers:


Compare these with the position:


Having partly confirmed the layout’s validity and identified possible artefacts, we moved on. What we had realized so far was that absences introduce errors in position, and that these errors tend to draw the absent towards the center, possibly confusing the arrangement so that someone might wonder: what is this clearly positional rep doing so close to the opposition? Is (s)he leaning towards them in voting? No, this is simply an artefact that absences introduce into the positioning due to the way these methods work.

Then we decided that we’d like a simpler visualization, one more suitable for a paper medium. So we ran t-SNE again in one dimension, then used a “beeswarm” layout to sketch things out. The beeswarm is essentially a one-dimensional layout in which clustered elements are pushed out onto the plane to avoid overcrowding on the single axis.
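A minimal beeswarm can be sketched in a few lines. This is a simplified version (real beeswarm layouts, like d3’s, alternate dots above and below the axis; here everything is pushed one way):

```javascript
// Each element keeps its 1D coordinate `x` and is bumped to the first
// free "row" whenever it would overlap an already placed neighbor.
// `r` is the dot radius.
function beeswarm(xs, r) {
  var placed = [];
  xs.forEach(function (x) {
    var row = 0;
    // bump the row until no already placed dot on that row is too close
    while (placed.some(function (p) {
      return p.row === row && Math.abs(p.x - x) < 2 * r;
    })) row++;
    placed.push({ x: x, row: row, y: row * 2 * r });
  });
  return placed;
}
```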

Finally, and mostly for aesthetic reasons, we converted that into a hex-binned layout. The number of hexagons corresponds to the number of dots above, voting attendance is encoded in opacity, and party affiliation is represented by color. Here is a sketch:


Here’s a closer view of the opposition:

As a final step, we removed everyone who was present at fewer than 200 voting sessions, and also added three control points:

  • neutral: a hypothetical representative who always abstains,
  • all yea: a hypothetical representative who always votes in favor of the proposition,
  • all nay: a hypothetical representative who always votes against the proposition

The neutral control point neatly bisects the space between the position and the opposition, not counting the mostly absent representatives from the position. The other two would be relevant only if all propositions came from the position – the “all yea” point would then mark the extreme pro-government stance. In reality, many acts are proposed by the opposition, so those two points are just not relevant. In the image below, the control points are the azure hexagons. Neutral is the leftmost one.

And here is a finished version, expertly done by a pro designer:


Added bonus: visualization of t-SNE in 3D:


Closeup on opposition:



So, what does this visualization really show? I’d like to say that since the acts put to a vote are mostly proposed by the governing coalition, it’s an arrangement of representatives on a continuum of support for government policy. But that is simply not so, as many acts are proposed by the opposition. It’s more that the arrangement depends on an individual’s position relative to the majority’s vote, which may or may not relate to the above.

This often coincides with the arrangement on the ideological spectrum, but it’s not the same. You might wonder what the cluster of weakly colored representatives in the middle right is. These are mostly ministers who cast a few votes at the beginning of the parliamentary term, but then left to serve in the actual government. They were still members at one point in time, so we included them in our research, but we could easily have dropped them, since they don’t really figure in day-to-day parliamentary work.

Most of the errors and counter-intuitive positioning are due to gaps in representatives’ voting records. These methods compare voting records component-wise, so if, for example, we have two members of the same party who substituted for each other (one was there when the other wasn’t, as is the case with the ministers), we can only compare their available records with everyone else’s, but not with each other’s.

t-SNE was also done in JavaScript, with another one of Karpathy’s libraries (tsnejs).

Here’s a final look at the data: a hierarchical clustering of all the representatives, including those mostly absent.


Original datasets and code are available by request. My mail address is on the About page.

Analysis of traffic violations in Slovenia between beginning of 2012 and end of 2014


This is my first attempt to use open data for data visualization in web presentation and for a mobile app. The idea was to cross-pollinate promotion, but it didn’t go so well – more on this later.

The analysis is published at a separate URL due to heavy use of JavaScript, which complicates things in WordPress. Click the link above or the big image of the parking ticket to read it.

Parking ticket
Parking ticket

According to data provided by the state police, the highway authority and local traffic wardens, a little under a million traffic violations occurred between the start of 2012 and September 2014. Given that there are 1,300,000 registered vehicles and 1,400,000 active driving licenses in the country, this is a lot. The vast majority of them are parking and toll tickets.

In the main article, there are a lot of images and charts. For example, I analyzed data for major towns in Slovenia to get the streets with the highest number of issued traffic tickets. Here’s an example for Ljubljana:

Parking tickets in Ljubljana
Streets with parking tickets in Ljubljana – click to read article

I had temporal data for each issued ticket, so I could also show on which streets you are more likely to be ticketed in the morning, midday or evening. On the image below, morning is blue, midday is yellow, and evening is red.

Tickets issued by hour
Tickets issued by hour – click for main article

This is, however, only the beginning. Here are questions I tried to answer:

  • Are traffic wardens and traffic police just another type of tax collectors for the state and counties?
  • Do traffic wardens really issue more tickets now than in the past, or is that just my perception?
  • Which zones in bigger towns are especially risky, should you forget to pay for parking?
  • Are traffic wardens more active in specific time intervals?
  • Do the police lay speed traps in the locations with the most traffic accidents? What about DUI checks?
  • How does temperature influence the number of issued traffic tickets?
  • Does the moon influence the number of issued traffic tickets? If so, which types?
  • Where and when are drivers most at risk of encountering other drunk drivers?
  • Where does the highway authority check for toll, and when to hit the road if one does not want to pay it?
  • How can we drive safer using open data?

Be sure to read the main article to see all the visualizations and interactive maps. There are also videos, for example this one, showing how the ticketing territory expanded through time in Ljubljana:

Parkirne kazni v Ljubljani 2012 – 2014 from Marko O’Hara on Vimeo.

Some other highlights:

The big finding was a sharp increase in the number of parking tickets issued in Ljubljana towards the end of 2013, which coincides with the publication of the debt that the city had run into:

Increase of parking tickets issued in Ljubljana
Increase of parking tickets issued in Ljubljana

There’s an interactive map showing the quadrants with most DUI tickets and their distribution by day of week and month in year:

DUI distribution
DUI distribution

Mobile app for Android

Mobile app for android - start screen
Mobile app for android – map

I also wrote an Android mobile app (get it on Google Play if you are interested) that locates the user and shows the locations of violations of the selected type on a map, as well as a threat assessment, should she want to break the law. Here’s the description on Google Play:

The app helps the user find out where and when traffic tickets were issued in Slovenia, thus facilitating safer driving.
The ticket database is limited to the territory of the Republic of Slovenia.

Choose which of these issued citations to show in the app:
– parking
– speeding
– driving while using a cellphone
– ignoring safety belt laws
– unpaid toll
and traffic accidents.

The app will locate you, fetch data about traffic citations issued in your vicinity, and show them on a map. To see citations that were issued somewhere else, tap the map. Also available is a summary of the threat level, derived from statistical data collected by government agencies.

Locating the user and showing dots on a map wasn’t really a challenge, but I wanted to show a realistic threat assessment based on location and time. To do that, I wrote an API method that calculates the number of tickets issued on the same day of the week in the same hour interval and then draws a simple gauge.

Let’s say, for example, that you find yourself in the center of Ljubljana on a Monday at noon, don’t have money for the parking fee, and really only want to take a box to a friend who lives there. You’ll be gone for only ten minutes, so should you risk not paying?

The app finds the total number of tickets issued on Mondays in the three-hour period between noon and 3 PM, then graphically shows the threat level along with some distributions, something like this:

Threat assessment
Threat assessment
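The counting step behind the gauge can be sketched like this. The ticket record shape and the fixed three-hour bucketing are my assumptions based on the description above, not the actual API code:

```javascript
// Count tickets issued on the same day of the week within the same
// three-hour window as `when`. `tickets` is an array of { date: Date }.
function threatCount(tickets, when) {
  var day = when.getDay();
  var bucket = Math.floor(when.getHours() / 3); // eight 3-hour windows per day
  return tickets.filter(function (t) {
    return t.date.getDay() === day &&
           Math.floor(t.date.getHours() / 3) === bucket;
  }).length;
}
```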

It works pretty well, and I use it sometimes, although I admit that its use cases may be marginal for the majority of the population. It does get ten new installs a day, although I don’t know how long this trend will continue.

I did send out press releases and mounted a moderate campaign on Twitter (here’s the app’s account), but it amounted to precious little. Maybe the timing was bad – I launched it during the Christmas holidays, when Internet usage is low. Or maybe this type of app just isn’t that interesting.

I’m currently working on an analysis of parking tickets for New York City; maybe that will be more interesting. There were, after all, more than nine million tickets issued there, and the data is much richer.

Stay tuned!

A project for Transparency International Slovenija – visualization of lobbying contacts between state officials and lobbyists


On the basis of the previous post, Transparency International Slovenia asked me to collaborate on some projects. This is one of them, and it was launched today on a separate site.

It’s an attempt to visualize several networks of lobbyists, their companies, politicians and state institutions. Perhaps the most interesting part is the network of lobbying contacts, which was constructed with data containing around 700 reported contacts between 2011 and late 2014.

As you may imagine, not every lobbying contact is reported. For those that are, records are kept at the Komisija za preprečevanje korupcije (Commission for the Prevention of Corruption, a state institution). Transparency International Slovenia obtained those records as PDF files, since the institution refused to provide them in a machine-readable format. They hired a few volunteers to copy and paste the information into spreadsheets, then handed those to me to visualize.

You can see the results below. Click here or the image to open the site in a new window. It’s in Slovenian. For methodology, continue reading below the image.

App screenshot - lobbying contacts
App screenshot – lobbying contacts


Network construction

The meaning of every network is determined by the nature of its nodes and connections. Here, we have four node types:

  • lobbyists
  • those who were lobbied – state officials
  • organizations on which behalf lobbying was performed
  • state institutions at which the abovementioned officials work

A lobbying contact is initiated by a company or an organization, which employs a lobbyist to do the work. These people then contact state officials of sufficient influence, who work at the appropriate state institutions.

So an organization is connected to the lobbyist with a weight of 2, the lobbyist to a state official with a weight of 1, and the state official to his or her institution with a weight of 2. The weights signify the approximate loyalty between these entities. We presupposed that lobbyists are more loyal to their clients than to the state officials, with whom they must be in a promiscuous relationship. Furthermore, the state officials are supposed to be more loyal to their employers than to the lobbyists, although this is a daring supposition. But let’s say they are, or at least that they should be.
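Each reported contact thus expands into three weighted edges. A minimal sketch, assuming one record per contact (the field names are illustrative, not the actual spreadsheet columns):

```javascript
// Turn one reported lobbying contact into the weighted edges of the
// network. The weights (2, 1, 2) encode the assumed loyalty above.
function contactToEdges(contact) {
  return [
    { source: contact.organization, target: contact.lobbyist, weight: 2 },
    { source: contact.lobbyist, target: contact.official, weight: 1 },
    { source: contact.official, target: contact.institution, weight: 2 }
  ];
}
```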

After some processing, the network emerged. Immediately apparent are the interest groups, centered around seats of power. Here’s an image of the pharmaceutical lobby. It’s centered on the Public Agency for Pharmaceuticals and Medicine. The main actors of influence are companies such as Merck, Novartis, Eli Lilly, Aventis, etc.

Pharmaceutical lobby
Pharmaceutical lobby

A click on the agency node brings up a panel with some details, such as a list of companies (font size indicates the frequency of contact), lobbying purposes and a timeline of lobbying contacts. Here we can see that Novartis and Krka were the most active companies, and that they lobbied over pricing and to limit potential competition from producers of generic drugs.

You can explore the network by yourself to see the other interest groups.

Who lobbied the drug agency?
Who lobbied the drug agency?


Some advice from Information Commissioner

Unfortunately, we had to omit the lobbyists’ names for reasons of supposed privacy. The Information Commissioner strongly advised us not to display them on the basis of an EU ruling. I’m not an expert in EU law, and perhaps there are good reasons for this. On the other hand, there may not be. I fail to see why this information would not be in the public interest, since these decisions have an impact on a significant number of taxpayers, if not all of them.

Anyway, we have the names. After all, we had to use them to connect the network. They are present in the raw data, just not displayed.

We’re probably going to continue developing this project, as new information comes to light and new rulings regarding privacy are issued.

Stay tuned!