From Science 2.0 to Pharma 3.0

Semantic Search and Social Media in the Pharmaceutical Industry and STM Publishing

Hervé Basset, David Stuart, Denise Silber

About the book

Science 2.0 uses the resources of Web 2.0 to communicate between scientists, and with the general public. Web 3.0, in turn, has brought disruptive technologies such as semantic search, cloud computing and mobile applications into play. The term Pharma 3.0 anticipates the future relationship between drug makers, doctors and their patients in light of such technology. From Science 2.0 to Pharma 3.0 examines these developments, discussing the best and worst of Web 2.0 in science communication and health. Successes such as the Open Access phenomenon are covered, as well as less successful networks. This title is divided into three parts. The first part considers the Web 2.0 revolution, the promise of its impact on science communication, and the state of Science 2.0. The second part looks at the impact on Pharma and Health, including attempts to utilise digital technologies in Pharma. The last part looks at the promising disruptive technologies of Web 3.0, including semantic search in biomedicine and enterprise platforms. The book concludes by looking forward to developments of '3.0' in Pharma and STM publishing.

  • Gives a global overview of success and failure in Science 2.0
  • Presents useful stories and lessons learned
  • Gives a clear view of how semantic search is present in science platforms and its potential in STM publishing


Information

Year
2012
ISBN
9781780633756
Part 1
From Science 2.0 (2000–2010) …

Not only is the web increasingly embedded within people’s personal and professional lives, but in recent years there has been a significant change in the way people view it: from a medium that most people accessed in a read-only manner to one in which most people participate. This participation may come through a conscious decision to upload certain content, or through unconscious contributions, as a user’s online behaviour is monitored and mined. This shift is at the core of the Web 2.0 vision, which has spread out from its technological heartlands to influence the way vast numbers of individuals and organizations across many sectors of society approach the web as they attempt to embrace its potential.
The first part of this book contains two chapters, the first of which is a discussion of the Web 2.0 revolution and its promise for Science 2.0. After a brief discussion of the coining of the term ‘Web 2.0’, the first chapter explores each of the three dominant threads of Web 2.0 and their application within the research process: the web as a platform; harnessing collective intelligence; and data as the ‘next Intel Inside’.
The second chapter discusses the gap between the potential promised by Web 2.0 sites and technologies, and the reality of Science 2.0. While some technologies have been enthusiastically embraced by certain researchers and funding agencies, the appeal is by no means universal, and there is still a lot of potential to be realized.
1

The Web 2.0 revolution and the promise of Science 2.0

David Stuart

Abstract:

The term ‘Web 2.0’ was coined to refer to certain attributes that differentiated some of the most successful websites at the beginning of the twenty-first century from those that were less than successful. It gained widespread attention and has been adopted by individuals and organizations in many sectors. This chapter discusses three of these attributes in detail: the web as a platform; harnessing collective intelligence; and data as the ‘next Intel Inside’. Particular attention is given to their application within science, where the ideas can be translated as transforming the world of scholarly publishing, providing new opportunities for citizen science and even offering a new scientific paradigm.
Key words
Science 2.0
cloud computing
open data
linked data
citizen science
While the dot-com bubble burst in 2001, the growth and usage of the Internet continued: between March 2001 and March 2003 the number of Internet users worldwide grew from 458 million to 608 million, an increase of almost 33 per cent (Internet World Stats, 2011). In 2003 the term ‘Web 2.0’ was coined to reflect a new post-crash vision of the web. In his seminal paper, ‘What is Web 2.0’, Tim O’Reilly produced a list of features that distinguished those websites that had most successfully survived the dot-com bubble from those that had not: the web as a platform; harnessing collective intelligence; data as the ‘next Intel Inside’; the end of the software release cycle; lightweight programming models; software above the level of a single device; and a rich user experience (O’Reilly, 2005). The term quickly caught on, and the ‘2.0’ suffix was added to everything from ‘library’ to ‘love’; it was used to reflect both a user-centric vision and the adoption of Web 2.0 technologies by specific fields of study (Scholz, 2008). The term has not been universally popular, however: some have questioned whether Web 2.0 is anything more than vague marketing jargon, while others have questioned the technologies and principles associated with it. Nonetheless, whatever the merits of the term, Web 2.0 has been a significant influence on discussions about how we use the web, and its technologies and practices have been adopted by a wide range of organizations and individuals. In this chapter, the changing nature of Web 2.0 and the technologies involved, as well as their application within the realm of science, are discussed within the context of three of O’Reilly’s themes: the web as a platform; harnessing collective intelligence; and data as the ‘next Intel Inside’ (O’Reilly, 2005).
While the details may have changed since 2005, these three themes continue to be the core of our understanding of Web 2.0. This is not to say that the other themes do not continue to be an important part of the web, but rather that issues surrounding the end of the software release cycle, lightweight programming models, software above the level of a single device and a rich user experience, are more appropriately discussed within the context of these three main themes. For example, the subject of lightweight programming models is discussed in the context of data as the ‘next Intel Inside’, and the subject of software above the level of a single device inevitably forms part of the discussion of the web as a platform.
This chapter could have focused solely on the harnessing of collective intelligence; as O’Reilly and Battelle (2009) stated in their revisiting of the subject of Web 2.0: ‘Web 2.0 is all about harnessing collective intelligence’. However, it is important to understand not only how people are harnessing collective intelligence, but also the computing paradigm within which it takes place, and this is discussed in the first section of this chapter: The web as a platform. What quickly becomes clear is how much the world has changed since 2005, as technologies and concepts that are now widely discussed and adopted were then still relatively niche products: the term ‘cloud computing’ was not popularized by Eric Schmidt until a conference in 2006 (Qian et al., 2009), Facebook was not made available to everyone until September 2006 and its platform was not launched until 2007 (Facebook, 2011), Apple’s iPhone was not unveiled until January 2007 (Honan, 2007), Amazon’s Kindle until November 2007, and the iPad until 2010. Each of these ideas and technologies has had an impact on people’s understanding of the web, the way they connect to it, and the expectations they have of it.
Equally important as the changing computing paradigm has been the growth in the importance ascribed to data, branded as the ‘next Intel Inside’. The leading-edge web-based commercial companies that were the focus of O’Reilly’s original paper have been joined by a wide range of non-profit organizations, governments, research institutions and individuals trying to make data publicly available online. These individuals and organizations have different objectives for the data that they are making available, and while the lightweight programming model that O’Reilly discussed continues to have an important role to play in making data available, there is also increasing interest in other approaches, such as making data available in a linked data format. As will be discussed later, while linked data is not the lightweight programming model that was the driving force behind the adoption of early application programming interfaces (APIs), the higher level of complexity offers the opportunity for a web that is both more semantic and more integrated.
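To give a flavour of what the linked data approach means in practice, the sketch below models its core idea: data expressed as subject–predicate–object triples whose nodes are URIs, so that records from different sources can be joined by shared identifiers. This is a minimal stand-in for a real triple store; the URIs, names and `query` helper are all hypothetical illustrations, not part of any specific platform discussed in the text.

```python
# Linked-data sketch: facts as (subject, predicate, object) triples.
# Using URIs as identifiers lets independently published datasets be
# merged simply by taking the union of their triples.
# All URIs and values below are hypothetical.
triples = {
    ("http://ex.org/paper/1", "http://ex.org/creator", "http://ex.org/person/42"),
    ("http://ex.org/person/42", "http://ex.org/name", "Jane Doe"),
    ("http://ex.org/paper/1", "http://ex.org/title", "Open Data in Science"),
}

def query(triples, subject=None, predicate=None, obj=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Who created paper 1? Follow the graph across two triples:
# paper -> creator URI -> creator's name.
creator = query(triples, "http://ex.org/paper/1", "http://ex.org/creator")[0][2]
print(query(triples, creator, "http://ex.org/name")[0][2])  # → Jane Doe
```

The second query reuses the URI returned by the first, which is exactly the ‘linking’ that distinguishes this model from an isolated API response: any dataset that mentions the same URI can be traversed in the same way.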
The adoption of Web 2.0 technologies and ideas in the realm of science has resulted in the inevitable coining of ‘Science 2.0’. The term has been used variously to refer to different stages of the research process: Shneiderman (2008) uses the term to refer to an approach to scientific investigation that makes use of Web 2.0 technologies to gather data, whereas Waldrop’s interpretation focuses on the use of Web 2.0 technologies for communication within the scientific community (Waldrop, 2008). Both are an important part of the future of science, and within this chapter a broad definition of Science 2.0 is taken to include both of these uses. Burgelman et al. (2010) have identified three significant trends in Science 2.0: a growth in scientific publishing, a growth in scientific authorship and a growth in data availability. These trends may be seen as the scientific equivalents of the three Web 2.0 themes selected for discussion: the web as a platform has enabled a wide variety of new types of publication; harnessing collective intelligence is recognized not only to include those within the traditional science community but also to embrace those beyond the walls of academia; and data as the ‘next Intel Inside’ offers a new approach to science.

The web as a platform: web services, the cloud and the app

In ‘What is Web 2.0’, the first principle of Web 2.0 is the use of the web as a platform, emphasizing the value of providing services automatically through the web rather than through traditional desktop software or services that necessitate lengthy human negotiation (O’Reilly, 2005). There is now a host of software available through the web: from applications so integrated in our experience of the web that we do not give them a second thought, to those that have been so embedded in our desktop experience (and have grown to such complexity) that web versions continue to feel like poor imitations.
Probably the most important piece of software we use on the web is one we do not even think of as software: the search engine. While it is so integrated with our online experience as to be for some people synonymous with browsing, a consideration of the alternative of a search engine application running on a person’s personal computer (PC) quickly demonstrates some of the advantages of using the web as a platform. In simple terms, a search engine can be thought of in three parts: the robot, the indexer and the ranking algorithm. The robot (also known as a web crawler or spider) is a program that downloads pages from the web in an iterative fashion. Starting with a list of seed URLs, a robot will download the web pages at the URLs and extract any URLs it finds in those web pages, which will then be downloaded in turn; a process that is repeated until all the web pages that are required have been downloaded. To create a search engine that is in any way equal to that of one of the major search engines would require the robot to download billions of pages, and with new pages being created all the time and many of the pages updated on a regular basis it would be an endless task. The index enables the search engine to match documents to a query without having to search through every document each time a query is entered. At the very minimum, an index is likely to include all the words in a document, although it may also include a host of other features, such as anchor text on those pages linking to a site, the type of page (e.g. .pdf, .rtf or .doc), or the creation date. While a more extensive index may help with the retrieval of more pertinent results, it will nonetheless take up more computer processing power. Finally, with the simplest of queries returning millions of hits, the results need to be ranked. 
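The robot-and-indexer pipeline described above can be sketched in a few lines. This is a toy model, not a real search engine: the ‘web’ is an in-memory dictionary standing in for HTTP fetching, and all URLs and page texts are hypothetical. It shows the two stages the text distinguishes: the robot following links out from seed URLs until no new pages remain, and the indexer building an inverted index from words to pages.

```python
from collections import defaultdict

# Toy "web": page URL -> (page text, outgoing links). A stand-in for
# real HTTP fetching; every URL and text here is hypothetical.
PAGES = {
    "http://a.example": ("science web data", ["http://b.example"]),
    "http://b.example": ("open data search", ["http://a.example", "http://c.example"]),
    "http://c.example": ("semantic search web", []),
}

def crawl(seeds):
    """Robot: download pages iteratively, queueing newly found links."""
    seen, frontier = set(), list(seeds)
    while frontier:
        url = frontier.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        text, links = PAGES[url]
        frontier.extend(links)  # discovered URLs join the queue
        yield url, text

def build_index(crawled):
    """Indexer: inverted index mapping each word to pages containing it."""
    index = defaultdict(set)
    for url, text in crawled:
        for word in text.split():
            index[word].add(url)
    return index

index = build_index(crawl(["http://a.example"]))
print(sorted(index["search"]))
# → ['http://b.example', 'http://c.example']
```

Looking up `index["search"]` answers a query without rescanning any page, which is the point of the index; what the sketch leaves out is the remaining stage, ranking the matching pages.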
While this may once have been based on the frequency or position of search terms on a page and how other websites link to a site, a search engine such as Google now uses over two hundred signals when ranking web pages (Google, 2011). The downloading and ranking of billions of web pages would take huge bandwidth and processing power, far beyond that which is available to the average user. Even if it were possible, it would be a huge waste of resources, as few of the pages would ever need to be discovered by the person who had downloaded and indexed them. Although there will be popular searches that many people use, these will be dwarfed by the long tail of niche searches that will only be used occasionally by a handful of people (Anderson, 2006). It is only efficient to index the web when there is a sufficiently large audience to make use of the index. The large audience can also provide useful feedback for a search engine, and help to improve the overall system. If, for example, on one particular search, people are regularly clicking on one particular link rather than another, the search engine can start to rank the selected item more highly; this is an example of harnessing collective intelligence.
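The click-feedback idea just described can be reduced to a one-signal sketch: results for a query are re-ranked by how often users have selected them. Real engines combine hundreds of signals, as the text notes; this illustration uses a single click counter, and the URLs and the `rank` helper are hypothetical.

```python
from collections import Counter

def rank(results, clicks):
    """Re-rank a result list so more-clicked pages appear first.

    A single-signal sketch of click feedback; a real engine blends
    many such signals. Python's sort is stable, so pages with equal
    click counts keep their original order.
    """
    return sorted(results, key=lambda url: clicks[url], reverse=True)

clicks = Counter()
results = ["http://a.example", "http://b.example"]

# Users issuing this query mostly click the second result...
for _ in range(5):
    clicks["http://b.example"] += 1
clicks["http://a.example"] += 1

# ...so the engine learns to promote it.
print(rank(results, clicks))
# → ['http://b.example', 'http://a.example']
```

The collective behaviour of many users, none of whom intended to curate the rankings, improves the results for everyone; this is the ‘unconscious contribution’ form of harnessing collective intelligence introduced earlier in the chapter.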
