Transparent algorithms: Why don't Google and Facebook release their algorithms as open source?

Algorithms involve us, but we do not know what they actually do. Google presents search results in a certain order, and Facebook arranges the news and updates on our timeline according to certain criteria, but these recipes are secret and jealously guarded by the companies that use them.

Should companies explain in detail what those algorithms do? Some argue for this option and draw an analogy with free software, a philosophy in which code is published, shared, modified, and redistributed freely. The debate on the transparency of algorithms is more topical than ever.

Facebook in the spotlight

Facebook's Trending feature is a good example of such algorithms. Although it is supposedly the users themselves, and the way they use the social network, that feed this algorithm, how the program actually operates is completely unknown.

Facebook's algorithm has a limitation: it is created, managed, and fed by humans. It is people (or rather, Facebook employees) who decide which data is valid, what is done with it, and what response is obtained from it. Whether an algorithm behaves well or badly is not only Facebook's fault but also ours, the users', since we feed the algorithm with our activity and our comments.

[Image: Transparent algorithms. Source: Google Images]

At Slate they tried to explain how this algorithm works from Facebook's offices in Menlo Park, California. Its main function is to sort the news and highlight the stories that are most relevant to each user. That is where relevance comes in: an article published anywhere in the world will probably carry more weight if it has been shared by our Facebook contacts, or if it covers a subject we are usually interested in, for example. But no one, except Facebook's engineers, knows exactly what makes a piece of content more or less desirable for a given user.

Those are just two of the variables the algorithm takes into account; Facebook admits that its algorithm actually weighs hundreds of them. It analyzes how you behaved in the past and predicts whether you will like a post, click on its link, comment on it, or mark it as spam: all of these, along with many other factors, form part of the relevance score assigned to you as a user for that particular piece of content.
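To make the idea concrete, a relevance score of this kind can be imagined as a weighted combination of predicted engagement probabilities. This is purely a sketch: the signal names, weights, and formula below are invented for illustration and have nothing to do with Facebook's actual system.

```python
# Hypothetical sketch of a feed relevance score. Nothing here comes from
# Facebook; the signals, weights, and formula are illustrative only.

def relevance_score(signals, weights):
    """Combine predicted engagement probabilities into a single score."""
    return sum(weights[name] * prob for name, prob in signals.items())

# Invented predicted probabilities for one user and one story.
signals = {
    "like": 0.30,          # P(user likes the post)
    "click": 0.55,         # P(user clicks the link)
    "comment": 0.05,       # P(user comments)
    "hide_or_spam": 0.02,  # P(user hides it or marks it as spam)
}

# Positive weights reward engagement; a negative weight penalizes rejection.
weights = {"like": 1.0, "click": 0.8, "comment": 2.0, "hide_or_spam": -5.0}

score = relevance_score(signals, weights)
print(round(score, 2))
```

A real system would learn such weights from past behavior rather than fixing them by hand, and would compute one score per (user, story) pair before sorting the feed.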

Throughout this process it is important to emphasize that the role of human beings is fundamental: the algorithm only solves part of the problem. Indeed, a recent Gizmodo article revealed that those suggested news stories owed little to the algorithm and much to filtering by human beings: a group of journalists, former employees (subcontractors, in fact) of Facebook, spoke about their time as editors of a section that can end up conditioning our way of seeing the world.

That's right: this group of about 20 editors has been shrinking, and some believe that this filtering task will eventually be taken over by an algorithm or artificial intelligence system that is learning how to do the job from these experts. The filtering process is already documented, since Facebook has tried to clarify part of how it works in this area, and in fact anyone can view the report, entitled Trending Review Guidelines.

Dark algorithms

The problem with secret recipes is that relying on them can produce unexpected results. We recently discussed the case of COMPAS, a controversial algorithm that serves as a "helper" for judges in the United States judicial system: it advises judges on the penalties to be imposed on those convicted of a crime.

The problem with COMPAS is that judges have taken it too seriously, which has led to some widely criticized decisions and triggered a debate on the validity of the algorithm. It is impossible to know for sure whether it is valid without being able to audit it, and hence the increasing pressure for transparency in the algorithms that govern our lives.

We have another good example in the US Social Security Administration, the agency that handles pensions of various kinds and assigns them according to reports that predict demographic variables, such as mortality rates, and economic ones, such as the unemployment rate.

A study by Harvard University revealed that these predictions were not exactly objective and did not take relevant socioeconomic factors into account. But because, once again, the algorithm is secret, its conclusions are debatable, especially in a society governed by law in which transparency in all public endeavors should be exemplary.

Google also under scrutiny

But among all these secret algorithms there is a holy grail. The recipe for Coca-Cola has its analogue in the world of the internet: what is the algorithm with which Google dominates search?

Google's search engine has long been the clear leader in this area: globally, 9 out of 10 searches on the network of networks run through Google.com or one of its national versions. The company has jealously guarded the secret of its algorithm all these years, and although it revises it often, it provides only scant information about which parameters influence whether one result appears above another. At most, it points to major trends, such as the recent boost given to websites that are "friendly on mobile devices."

As in the case of Facebook, Google search shapes our understanding of the world. It filters and directs that understanding, so knowing how it works would be especially interesting. There have been attempts to pressure Google into revealing the secret formula: last year the French Senate tried to pass an amendment that would have forced Google to disclose its ranking criteria and would have enabled telecommunications regulators to inspect the code.


What are they doing with our data, and how?

Data will save us. That is one of the messages the defenders of Big Data have given us. Celebrities like Bono and Bill Gates are promoters of so-called "factivism", the activism of facts and data, which feeds this massive collection theoretically designed to improve our society by analyzing all that data.

The promises and expectations of Big Data were huge, and not long ago we talked about how some algorithms seem to know us better than we know ourselves. The reality seems to be quite different: although there are obviously areas in which this discipline has proved its worth, the Big Data phenomenon is going through a period of significant doubt.

This is evidenced, for example, by Gartner's famous Hype Cycle, which in 2013 placed Big Data in the "Trough of Disillusionment", the phase in which the industry and users start to wonder who is using the technology, how, and for whose benefit.

Concern became especially apparent after the publication of "How Companies Learn Your Secrets", a New York Times article that revealed how the retail chain Target managed to deduce that a young woman was pregnant even before her father knew it. That, as they say at TechRepublic, suddenly made many people realize how far this trend could go.

Although there are of course positive examples of the use of these vast amounts of data, many of the companies that exploit them do not explain clearly and in detail how their algorithms work. This is where laws like the Freedom of Information Act (FOIA) in the United States come in, defending the right of that country's citizens to access the information and data stored by the federal government.

Spain also launched similar legislation with Law 19/2013 of December 9, which sought to guarantee "transparency, access to public information and good governance" and was reinforced by the Government's so-called Transparency Portal, where diverse data and information are published. Even so, it has failed to quell the criticism that existed in this regard.

It still poses the same problems as at the start: it does not recognize access to information as a fundamental right; it excludes many types of information (notes, drafts, opinions, summaries, reports, and internal communications between administrative bodies or entities); it provides for negative administrative silence twice over (the administration can fail to answer, in which case the application is dismissed, and the review body can do the same); and the Council of Transparency is not independent (with the latest amendments, this review body will be completely politicized).

Open source algorithms

In the debate on transparency, governments usually speak of data and information, but never of algorithms. It is good to have public access to at least some of that data, but it is worrying that the same does not happen with the algorithms that operate on it. Or is it?

That is what they wondered at the International Association of Privacy Professionals (IAPP), a community of experts who discussed the potential transparency of algorithms in a field they described as "delicate".

Even if a company published a proprietary algorithm, the task of understanding it and reacting to it would be extremely complex. It is unlikely that consumers and governments could understand what the algorithm says or means, since it probably undergoes continuous changes over time to react to new data inputs, and it would be difficult to decide how to measure injustices, or where to look for them: in the inputs, the outputs, the decision trees, or any of their effects. These challenges could even leave companies that are very careful to avoid discrimination unsure of what the best practices are in this regard.

Some go further. The people at Akashic Labs, a data science consultancy, explained in a presentation on the transparency of algorithms that access to the code of these algorithms is not enough: there should be no secrets in any field. The data inputs should be open to scrutiny, and a justification should be offered for the outputs these algorithms produce from those inputs.
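The kind of input-to-output justification Akashic Labs calls for can be illustrated with a toy scoring model whose decision is broken down feature by feature. This is only a sketch under invented assumptions: the feature names, weights, and threshold below are made up and do not come from Akashic Labs or any real system.

```python
# Illustrative sketch: a toy linear scoring model whose output can be
# justified input by input. All names, weights, and the threshold are
# hypothetical; no real decision system is being described.

def explain_decision(features, weights, threshold):
    """Return the decision plus each input's contribution to the score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    return decision, score, contributions

features = {"income": 3.0, "debt": 4.0, "years_employed": 2.0}
weights = {"income": 0.5, "debt": -0.4, "years_employed": 0.3}

decision, score, contributions = explain_decision(features, weights, 1.0)
print(decision)
for name, contrib in sorted(contributions.items()):
    # Each line is a justification: how much this input pushed the score.
    print(f"{name}: {contrib:+.2f}")
```

With a linear model like this, every output comes with a complete accounting of which inputs drove it and by how much; the difficulty the article describes is that real systems are rarely this simple, which is precisely why auditing inputs and outputs together matters.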
