Will the Semantic Web be Next? Will it be Web 3.0?

Web 3.0 – hither and yon

Let’s turn our attention to what is next for the World Wide Web. Most commentators appear focused on something called “The Semantic Web.” The goal of the Semantic Web is to enable automatic machine generation and processing of content. It demands rich, machine-recognizable semantics in web pages so that machines can understand web content; the aim is to build a world of intelligent, inter-communicable web pages. In plain language, achieving this objective requires everyone and everything on the Web to use the same approach to content. People understand statements because of syntax rules: the syntax of any language defines the rules for building its statements. For the Semantic Web to work, there will need to be a common syntax that all computers understand. The work toward a Semantic Web is an attempt to describe all things in a way that computer applications can understand. This is where the controversy begins: not everyone can agree on one approach. Another controversy is whether the benefits would outweigh the costs even if we could agree on the technical elements.

So what would the benefits be? The use of universally consistent formal terms and structures would allow all content – text, video, audio, photos, etc. – to be recognized, explained and understood by humans and machines alike. This process of recognition, explanation and understanding is, in practice, a process of semantic annotation, or semantic authoring. Hence the name: the Semantic Web, or Web 3.0. Semantic annotation and authoring in Web 3.0 will allow content in web pages to map to a formal ontology of definitions that machines can recognize and process. Clearly this is easier said than done. But why is it so important?
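To make “semantic annotation” concrete, here is a minimal sketch using JSON-LD with schema.org vocabulary, one real-world annotation format that grew out of this effort. The product and all of its details are invented for illustration.

```python
import json

# The same product description a human reads on the page, restated in
# JSON-LD with schema.org terms so a machine can recognize what each
# value means. The product itself is a made-up example.
annotation = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Noise-Cancelling Headphones",
    "offers": {
        "@type": "Offer",
        "price": "199.99",
        "priceCurrency": "USD",
    },
}

# Embedded in a page inside <script type="application/ld+json"> ... </script>,
# this tells a crawler that "199.99" is a price in US dollars, not just text.
print(json.dumps(annotation, indent=2))
```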

By using common annotation and language, machines will be able to use logic to make inferences and draw conclusions. This concept is tied very closely to that of artificial intelligence. The inferences and conclusions will not always be exact or correct, but they can improve over time.
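As a toy illustration of that idea – deliberately simplified, and not any particular Semantic Web engine – the following sketch stores facts as subject-predicate-object triples and applies two classic rules until it can derive a statement nobody wrote down explicitly.

```python
# Facts as subject-predicate-object triples (invented examples).
facts = {
    ("GoldenRetriever", "subClassOf", "Dog"),
    ("Dog", "subClassOf", "Animal"),
    ("rex", "type", "GoldenRetriever"),
}

# Apply two classic inference rules until no new facts appear:
#   1. subClassOf is transitive.
#   2. an instance of a subclass is also an instance of the superclass.
changed = True
while changed:
    changed = False
    for s, p, o in list(facts):
        if p == "subClassOf":
            for s2, p2, o2 in list(facts):
                if p2 == "subClassOf" and s2 == o and (s, p, o2) not in facts:
                    facts.add((s, "subClassOf", o2))  # rule 1
                    changed = True
                if p2 == "type" and o2 == s and (s2, "type", o) not in facts:
                    facts.add((s2, "type", o))  # rule 2
                    changed = True

print(("rex", "type", "Animal") in facts)  # True, though never stated directly
```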

What are practical examples of how this could benefit the average web user? For one, shopping would become faster, better informed and most likely cheaper. If information on all merchandise – whether music, cars, shoes or electronics – were stored in a common format, then intelligent web applications (the machines) could collect that information from many different sources, combine it and present it to users in a meaningful way. Moreover, web applications could be set up to monitor information and alert users whenever there is a meaningful change in price, availability or features. Anyone who has tried to buy a car, research prescriptions or book air travel can immediately see the benefits of this kind of functionality. Further, managing financial information, software updates or even social contacts would all be easier if there were a common semantic language for data.
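As a sketch of that shopping scenario, suppose every merchant described offers with the same fields. The fetch_offers function below is a hypothetical stand-in for querying real shared-schema sources, and its data and thresholds are invented; the point is how little code comparison and monitoring would then require.

```python
def fetch_offers(product_id: str) -> list[dict]:
    # Hypothetical placeholder for querying many sources that share one schema.
    return [
        {"seller": "store-a", "price": 199.99, "in_stock": True},
        {"seller": "store-b", "price": 184.50, "in_stock": True},
        {"seller": "store-c", "price": 210.00, "in_stock": False},
    ]

def best_offer(product_id: str) -> dict:
    # Compare every available offer because each one uses the same fields.
    offers = [o for o in fetch_offers(product_id) if o["in_stock"]]
    return min(offers, key=lambda o: o["price"])

def price_alert(product_id: str, last_price: float, threshold: float = 0.05) -> bool:
    # Alert when the best available price drops by more than `threshold`.
    current = best_offer(product_id)["price"]
    return current < last_price * (1 - threshold)

print(best_offer("headphones-123"))                       # store-b at 184.50
print(price_alert("headphones-123", last_price=199.99))   # True
```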

Web 1.0 was conditional and reactive. Web 2.0 began to build social and common-language connections. In Web 3.0, semantic web pages will embed web services like those described above. As a result, remote machine agents will be able to understand the content on local systems, and local system agents will understand the meanings of remote requests.

If the Semantic Web were realized, the web would become like a society of educated people. That dream, however, remains out of reach, for many reasons. Besides all the technical difficulties, one main obstacle is the lack of a consistent communication protocol. The W3C, the consortium promoting the Semantic Web, has developed RDF, the “Resource Description Framework,” as its standard data model.
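To give a feel for what RDF looks like in practice, here is a minimal sketch using rdflib, a Python library for working with RDF. The URIs below are invented for illustration, not a real vocabulary.

```python
# A minimal RDF sketch, assuming the rdflib package (pip install rdflib).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)  # so the serialized output uses the short "ex:" prefix

# RDF describes everything as subject-predicate-object triples.
g.add((EX.page1, EX.topic, EX.SemanticWeb))
g.add((EX.page1, EX.title, Literal("An Introduction to the Semantic Web")))

# Turtle is one common concrete syntax for writing RDF triples out.
print(g.serialize(format="turtle"))
```

The design choice matters more than the code: because every statement is reduced to the same triple shape, any two machines that share the vocabulary can exchange and combine statements without knowing anything about each other’s page layouts.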

So what have we observed in the web’s evolution to date? Web 2.0 represents a new generation. In terms of content, linkage and services, it is a significant advance over Web 1.0. Content and linkage in Web 2.0 rely heavily on collaborative activity; in contrast, the majority of content and links in Web 1.0 came from the independent work of webmasters. In addition, Web 2.0 has advanced the types of services available with web feeds and web widgets, moving from the “conditional reflex” of Web 1.0 to a more dynamic and interactive set of services today.

What comes next is still unclear, but it looks like making all data on the web more recognizable to machines will bring real benefits. The issues of time and money, however, have not been adequately addressed.

Comments

Craig Daitch said…
Great article on the Semantic Web, Jeff. I think you hit the nail on the head when you mentioned consistent communication protocols being one of the largest inhibitors of Web 3.0. This is why I have a difficult time believing the dream of a fully automated 3.0 experience will be realized any time in the near future. Services like Cha Cha have taken an automated/human hybrid approach. Semantic scoring through social media measurement and monitoring tools, such as Trucast and Relevant Noise, does the same. The issue is scale. Can the semantic web ever become 100% reliant on automation, without human correction?

Our lexicon constantly changes. It's why contextual advertising can produce such baffling results. Regardless of the barriers, though, I sincerely hope that in the not-so-distant future the semantic web becomes a reality.
