AI can synthesize and reproduce combinations of words it has seen before. It can provide helpful recommendations or predict the next word in a sentence. Yet when the task is to understand something new to human knowledge, when context is critical, or when human experience plays a major role in understanding, the technology runs into severe limits. When generative AI is applied in an advanced field to push knowledge forward, these systems are not yet capable of full autonomy.

AI’s Limitations

Over-reliance on AI has manifested itself in many ways. Earlier this year, the National Highway Traffic Safety Administration released data showing hundreds of crashes and several deaths over a single year connected to driver-assistance technology. This technology may be groundbreaking and exciting, but it needs to be used with a watchful eye. These systems are still so new that we must monitor them closely to be certain that factors no one anticipated aren't causing accidents.

Examples of AI failing to live up to its promised role keep accumulating. Something as simple as a sticker on a stop sign can prevent a self-driving car from recognizing that it needs to stop, which could result in a serious accident. There was also the infamous case of a self-driving Uber that had not been trained to stop for jaywalking pedestrians and ended up killing someone. Failures like these can skew the data used to train future models, and they can also be dangerous to human life.

Another example of AI proving unreliable comes from scientists at the National Institutes of Health, who showed that AlphaFold2, a remarkable advancement in protein structure prediction, failed to predict protein fold switching. The system can generate impressive protein models, but this failure further demonstrates that it does not yet have the contextual, human knowledge needed to predict something so vital, resulting in dire inaccuracies.

More recently, we have seen major issues with AI chat technologies convincingly fabricating answers, sharing wrong data, and otherwise hallucinating outright: in smaller ways misleading the user, and in bigger ways spreading misinformation or suggesting dangerous or criminal ideas and actions.

These examples reflect both the extremes and the dangers of AI systems left unchecked; the balance comes in using AI for its benefits while mitigating its downsides as much as possible. Judea Pearl, a pioneer in AI, told Quanta Magazine that “all the impressive achievements of deep learning amount to just curve fitting.” Because such systems only fit the data they are given, bad inputs produce bad outputs: garbage in, garbage out. Too often, a faulty machine is simply discarded rather than examined to learn what caused the mistake and turn it into a solution or an improvement. Worse, those faults can be ignored altogether, producing misleading research.

Advancing AI in Systems Across the Enterprise

The answer to AI’s reliability problem lies in the tools we create: tools that augment and complement AI with rigor and trustworthiness, and that amplify human judgment. Much work is under way toward this, including better monitoring, root-cause analysis of faults, simpler retraining, bias detection, and improved interpretability. These tools should be paired with a human-centered AI approach, in which AI techniques amplify and support our abilities while respecting and preserving the human context. When we think of the technology as augmenting humans, AI works. When we think of it as a replacement, AI fails.

Not long ago, there were predictions that radiologists’ jobs were in imminent danger and that AI would soon replace them. Instead, we have consistently seen that AI can help, not replace, the humans in these roles. Radiologists now use AI to amplify their ability to read vast numbers of images and to identify and detect diseases. They must still monitor these systems, though, because something as seemingly small as stray markings on radiology images can lead to inaccurate results. Understanding how technology can amplify human expertise in areas like this will drive continued, broader interest in and advancement of AI.

An interactive loop is possible across the enterprise: AI systems can help us advance hypotheses, process and synthesize results, and iterate. Such an AI-augmented approach is not only feasible but the best way to arrive at remarkable discoveries and insights. Again, this requires tools that provide scalable monitoring, because such tools are key to ensuring that the risks associated with models (drift, uncertainty in the data, lack of documentation, unclear lineage, and so on) are minimized so we may freely use AI to amplify our judgment.
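
To make the idea of drift monitoring concrete, here is a minimal sketch (illustrative only, with made-up data and an assumed threshold, not a description of any particular product) that flags data drift by comparing a feature's production distribution against its training distribution using a two-sample Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_values, live_values, alpha=0.05):
    """Flag drift when the live feature distribution differs
    significantly from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return {"statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

# Illustrative data: the production distribution has shifted slightly.
rng = np.random.default_rng(seed=0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature values at training time
live = rng.normal(loc=0.4, scale=1.0, size=1_000)   # feature values in production

report = detect_drift(train, live)
if report["drifted"]:
    print(f"Drift detected (p = {report['p_value']:.2e}): investigate or retrain")
```

An enterprise-grade monitor would track many features and prediction distributions continuously and feed alerts like this into root-cause analysis and retraining workflows, but the core idea is the same.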


The Future of Productive AI

We will see a shift toward parameters, or “features,” that shape how AI works with humans rather than replacing them. AI techniques improve efficiency, encouraging a symbiosis between AI and our daily lives. Incorporating cooperation and human feedback will create intelligent systems that save us time and energy, freeing us to apply our cognitive abilities more effectively.

We are currently in a crucial phase of AI development: deciding how it will work in our lives in a trustworthy, explainable, and beneficial way. This requires capabilities like monitoring and automatic retraining, which enable companies to build scalable, effective AI solutions and to ensure models are not becoming biased or unfair in their behavior. Earning trust, more than advancing the technology itself, is our most significant responsibility. When used to amplify human abilities, the technology truly has unlimited potential to accelerate humanity’s advancement.
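
As a simplified illustration of what monitoring for biased or unfair behavior might look like (a sketch with hypothetical data and an arbitrary threshold, not any specific product's method), a pipeline could track the gap in positive-prediction rates between two groups, often called the demographic parity difference, and flag the model for retraining or review when the gap grows too large:

```python
import numpy as np

def demographic_parity_gap(predictions, group_mask):
    """Absolute gap in positive-prediction rates between two groups."""
    rate_a = predictions[group_mask].mean()
    rate_b = predictions[~group_mask].mean()
    return abs(rate_a - rate_b)

# Hypothetical batch of binary model outputs and group membership flags.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group_a = np.array([True, True, True, True, False, False, False, False])

GAP_THRESHOLD = 0.1  # illustrative tolerance; set per use case and regulation
gap = demographic_parity_gap(preds, group_a)
if gap > GAP_THRESHOLD:
    print(f"Fairness gap {gap:.2f} exceeds {GAP_THRESHOLD}: flag for retraining and review")
```

Automatic retraining would hang off checks like this one: when monitored metrics degrade, the pipeline schedules a retraining run and a human review rather than silently continuing to serve the model.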

Our Approach

H+AI, human plus AI, is the philosophy that underpins all of the work we do at Vianai, all of the products we build, and how we work with customers.

Our ML monitoring solution enables high-performance ML operations at scale across the enterprise: detailed monitoring, root-cause analysis, retraining, and model validation in a continuous loop across large, complex, feature-rich models, ensuring those models are trustworthy, explainable, and transparent.

Our performance acceleration technology aims to reduce the cost and resources needed to run AI, increasing access and making AI more responsible in terms of both cost-performance and environmental impact.

Dealtale brings advanced AI techniques directly to marketing professionals: conversational AI that sits on top of marketing, CRM, and advertising platforms, together with causal inference.

Finally, hila, our AI-powered financial research assistant, was built from scratch with reliability in mind: our document-centric approach helps ensure that answers are accurate and include citations from the underlying financial text.

To learn more about our high-performance ML monitoring capabilities that can help your business tackle AI’s reliability problems, request a demo here. To learn more about all of our products, get in touch here. We would love to connect!

Be sure to also check out our new video series, A Conversation about Human-Centered AI, in which we tackle various aspects of AI’s reliability problems, and how we can work to solve them. The first episode, Hype vs. Reality, is live here.