With COETIC, optimize your regulatory compliance

Your partner for the management and validation of your regulated projects

Wednesday, October 9, 2019

AI, Data Integrity, & The Life Sciences: Let's Not Wait Until Someone Dies

By Kip Wolf, Tunnell Consulting, @KipWolf
The idea of machines that can think became a topic of science fiction in the early 20th century and made for interesting reading. Science caught up, and the term “artificial intelligence” (AI) was coined by John McCarthy at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956, where the first AI program, the Logic Theorist, was presented by Allen Newell, Cliff Shaw, and Herbert Simon.1
AI research flourished in its early years until it was slowed by limits in computational power, but it was reinvigorated in the 1980s by new computational tools and investment, when John Hopfield and David Rumelhart popularized deep learning techniques that allowed computers to learn from experience.1 The next limitation to AI advancement was computer storage, which by the late 1990s was no longer a problem, as storage advances produced cheap and ubiquitous solutions. Today we carry everyday devices that dwarf the storage capacity of supercomputers from only a few decades ago. AI has now gone mainstream, leaving the labs and entering our living rooms through intelligent assistants (e.g., Alexa and Siri) and smart TVs. AI is in the news and on our tongues: scarcely a week goes by without a television commercial or someone in our social circles mentioning it. But what is AI, and how might it affect our lives when applied to the life sciences?
Garbage In, Garbage Out
For the sake of this discussion, we can agree that AI produces predictions, classifications, and decisions from computational analysis of large data sets, based on machine learning (ML) models built from representative data sources and further refined by that data and its related results. In this context, AI may, for example, offer significant potential for improving the efficiency of research and development activities, such as discerning viable drug targets for further investigation. It may also expand manufacturing capacity by reducing the potential for defects and accelerating product review, release, and disposition for shipping through the supply chain. However, great risk accompanies this potential for great reward: mistakes or losses caused by poor AI results could negatively impact public health...
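To make the "garbage in, garbage out" point concrete, here is a minimal, self-contained Python sketch. Everything in it is invented for illustration (the synthetic 1-D "assay readouts", the two classes, the +3.0 calibration offset, and the simple nearest-centroid model are assumptions, not anything from the article): the same learning procedure, fed systematically flawed training data, yields a measurably worse model than the one fed good data.

```python
import random

random.seed(0)

# Synthetic 1-D data: class 0 clusters near 0.0, class 1 near 5.0
# (stand-ins for, say, an assay readout that separates two outcomes).
data = ([(random.gauss(0.0, 1.0), 0) for _ in range(200)]
        + [(random.gauss(5.0, 1.0), 1) for _ in range(200)])

def train_centroids(samples):
    """A deliberately simple 'model': the mean readout of each labeled class."""
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in samples:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(centroids, samples):
    """Classify each point by its nearest centroid and score against truth."""
    hits = sum(1 for x, y in samples
               if min(centroids, key=lambda c: abs(x - centroids[c])) == y)
    return hits / len(samples)

# Good data in: the learned centroids separate the classes almost perfectly.
clean_acc = accuracy(train_centroids(data), data)

# Garbage in: suppose the training instrument was miscalibrated by +3.0
# units, so every training readout is systematically wrong.
garbage = [(x + 3.0, y) for x, y in data]
garbage_acc = accuracy(train_centroids(garbage), data)

print(f"trained on good data:    {clean_acc:.2f}")
print(f"trained on garbage data: {garbage_acc:.2f}")
```

The learning algorithm is identical in both runs; only the data quality differs, and the miscalibrated training set pushes the decision boundary into the middle of one class. That is the core of the data integrity concern: an AI system faithfully learns whatever its inputs contain, including their defects.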
More information here.