
What obstacles do you see to the widespread adoption of the Semantic Web in automating useful tasks, and how could these be overcome?

The Semantic Web, as described in the Scientific American article, seems a wonderful prospect. The possibility of computers aiding us more directly in our daily tasks of gathering, reading and selecting relevant information would spare humans a great deal of time-consuming work. However, there seem to be multiple difficult hurdles to clear before this idea becomes a reality.

First of all, for Semantic Web agents to be able to automate any task, they will need information readily available to work with. Today, this information is not yet available on the web, which means something has to be done to get it out in the open for the agents to use. The problem is that web developers today do not really have a reason to publish their information in some sort of RDF format, as it is a lot of work for little result: not many people are using Semantic Web agents, nor would those agents be able to link the information with many other resources, since those are still scarce.
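To make this concrete, here is a rough sketch, in Python with the rdflib library, of what publishing such information could look like. The vocabulary (http://example.org/schema#) and the doctor's office resource are made up purely for illustration, not part of any real standard:

```python
# Hypothetical sketch: a doctor's office publishing its opening hours as RDF,
# so that a Semantic Web agent could read them. The ex: vocabulary is invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/schema#")       # made-up vocabulary
office = URIRef("http://example.org/dr-smith")     # made-up resource

g = Graph()
g.bind("ex", EX)
g.add((office, RDF.type, EX.DoctorsOffice))
g.add((office, EX.openingHours, Literal("Mondays 10:00-12:00")))
g.add((office, EX.openingHours, Literal("Wednesdays 14:00-18:00")))

# Serialize to Turtle; a website could expose this file alongside its HTML pages.
print(g.serialize(format="turtle"))
```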

However, this is not an unknown problem. It is very similar to one that today's big websites have had to deal with, and for which they seem to have found an answer: start small and specific, then build on top of it. The best-known example of this is Facebook. There is no point in joining Facebook if none of your friends or contacts are there to talk to, relate to or follow. By starting small (in this case, at Harvard University), people quickly found most of their friends on the website and got interested. Once that was established, it was a matter of not opening up access too fast, to keep the effect alive.
The same can be done for the Semantic Web: start with a specific use and a specific user group, then add on top of that, little by little.

A second obstacle for the Semantic Web is that a clear revenue model will have to be developed for everyone involved. A website like a doctor's office publishing its opening hours has a clear gain in making that information available to its customers: patients show up on time and get less frustrated, and the doctor loses nothing. However, this is not true for everyone. Take pure content websites, for example. They make their money with advertising and want people to visit their pages. If a Semantic Web agent simply uses such a website as a source, gathering the information it needs to give the user an answer, the user never sees the advertisement and the content creator does not get paid, which in turn discourages them from writing more and expanding the web as a whole.

One way of countering this issue would be to incorporate ads within the RDF response that an agent gets from a website. For example: the Semantic Web agent queries a website for "Opening Hours" and gets back "Mondays 10:00 – 12:00" together with "Café du soleil is giving £2 off their menu before 12:00 that day!". This way the user gets the information they need, and the agent can decide whether to pass on the advertisement as well (even if the user did not ask for it), depending on how the user configured their client. The website could even force the agent to pass on the advertisement if it wants the rest of the information, i.e. the information the user is actually looking for.
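As a rough illustration of this idea (again in Python with rdflib, and again with a made-up vocabulary and café), an agent could handle such a combined response as sketched below, passing the advertisement on only if the user's client is configured to accept ads:

```python
# Hypothetical sketch: the site's RDF response carries both the requested
# opening hours and an advertisement; the agent decides whether to show the ad.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/schema#")  # made-up vocabulary

RESPONSE = """
@prefix ex: <http://example.org/schema#> .
<http://example.org/cafe-du-soleil>
    ex:openingHours "Mondays 10:00-12:00" ;
    ex:advertisement "Café du soleil: £2 off the menu before 12:00 today!" .
"""

def answer(user_accepts_ads: bool) -> list[str]:
    g = Graph()
    g.parse(data=RESPONSE, format="turtle")
    result = [str(o) for o in g.objects(None, EX.openingHours)]
    if user_accepts_ads:  # user preference set in the agent's configuration
        result += [str(o) for o in g.objects(None, EX.advertisement)]
    return result

print(answer(user_accepts_ads=False))  # only the opening hours
print(answer(user_accepts_ads=True))   # opening hours plus the advertisement
```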

The last obstacle, but not the least, is the end user's fear of technology. Today, we are seeing more and more examples of people or governments drawing lines between what information should be publicly accessible and what should stay private. One big example is the Google Street View project, which has caused major discussions all over the world. Technically, all it does is make information that is already public available in a more convenient way (browsable on the internet). This fact has made more than just a few people think twice about what they consider "private" and what they would not mind sharing with the world. The Semantic Web will face the same problem: although the information it builds upon will be publicly available (if not, it would be infringing the law directly), the fact that it makes this information accessible in a more connected manner, drawing the links between different elements without the user having to work them out themselves, will spark a new wave of debate on what we consider "private" information.

This issue, in my opinion, even though it is largely a technology phobia experienced by people who do not really understand what is going on behind the scenes, is not to be underestimated. As in most large-scale technology projects, the users can easily end up being the largest obstacle.

However, if the problem is approached from the right angle and at the right speed, informing people about the technology before implementing it, and explaining it in a way they really understand, will go a long way. The rest is simply a question of time, until people accept that technology is moving forward whether they want it or not.

We can see that many challenges lie ahead of the Semantic Web. However, as discussed above, there always seems to be a way around them. If we use the right approach and evolve at the right pace, not outrunning the users, the Semantic Web simply seems like the next logical step for the internet. On top of that, I have never thought there was a real way of stopping evolution. Evolution will always find a way, like water continuing its route to wherever it wants to go.

Tim Berners-Lee, James Hendler and Ora Lassila, "The Semantic Web", Scientific American, May 17th, 2001