The secret to enterprise AI success: Make it understandable and trustworthy - VentureBeat
July 16, 2023 8:20 AM
Image Credit: VentureBeat made with Midjourney

The promise of artificial intelligence is finally coming to life. From healthcare to fintech, companies across sectors are racing to implement LLMs and other machine learning systems to complement their workflows and free up time for more pressing, higher-value tasks. But it is all moving so fast that many may be ignoring one key question: How do we know the machines making decisions are not prone to hallucinations?

In healthcare, for instance, AI has the potential to predict clinical outcomes or discover drugs. If a model veers off-track in such scenarios, it could produce results that end up harming a person, or worse.

This is where AI interpretability comes in: the process of understanding the reasoning behind decisions or predictions made by machine learning systems, and making that information comprehensible to decision-makers and other parties with the authority to act on it. Done right, it helps teams detect unexpected behaviors and fix the issues before they cause real damage.

But that is far from being a piece of cake.

First, let's understand why AI interpretability is a must

As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important for ensuring transparency and accountability in the systems being used. Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for biases, accuracy, fairness and adherence to ethical guidelines.
Meanwhile, accountability ensures that the gaps identified are addressed in a timely manner. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnosis and autonomous driving, where an AI's decision can have far-reaching consequences.

Beyond this, AI interpretability helps establish trust in and acceptance of AI systems. When individuals can understand and validate the reasoning behind a machine's decisions, they are more likely to trust its predictions and answers, leading to wider acceptance and adoption. Just as important, when explanations are available, it is easier to address questions of ethical and legal compliance, whether over discrimination or data usage.

AI interpretability is no easy task

While the benefits of AI interpretability are clear, the complexity and opacity of modern machine learning models make it a serious challenge. Most high-end AI applications today use deep neural networks (DNNs), which employ multiple hidden layers to enable reusable modular functions and to use parameters more efficiently when learning the relationship between input and output. Given the same amount of parameters and data, DNNs readily outperform shallow neural networks, which are often used for tasks such as linear regression or feature extraction. However, this architecture of many layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a model's decision. Shallow networks, by contrast, with their simple architecture, are highly interpretable.

The structure of a deep neural network (DNN) (Image by author)

To sum up, there is often a trade-off between interpretability and predictive performance.
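To make the opacity point concrete, here is a minimal sketch in plain Python (with hypothetical layer sizes, not figures from the article) that counts the parameters of a fully connected network. Adding hidden layers quickly multiplies the number of weights a human auditor would have to reason about:

```python
# Each fully connected layer contributes (inputs * outputs) weights
# plus one bias per output unit.
def param_count(layer_sizes):
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# A shallow network with a single small hidden layer...
shallow = param_count([20, 10, 1])       # 221 parameters
# ...versus a deeper network with three hidden layers of 64 units each.
deep = param_count([20, 64, 64, 64, 1])  # 9,729 parameters
```

Even at this toy scale, the deeper network has roughly 44 times as many parameters; production DNNs push this into the millions, which is precisely what makes tracing an individual decision so hard.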
Choose a high-performing model like a DNN and the system may not deliver transparency; choose something simpler and interpretable, like a shallow network, and the accuracy of the results may not be up to the mark. Striking a balance between the two remains a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.

What can be done?

To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear mode...
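A rule-based model's reasoning, unlike a DNN's, can be read off directly. As an illustration (the feature names and thresholds below are hypothetical, not from the article), a hand-written stand-in for a small credit-scoring decision tree shows how every prediction can carry its own explanation:

```python
# Toy decision tree for credit scoring: each prediction returns both the
# decision and the exact sequence of rules traversed to reach it.
def score_applicant(income, debt_ratio):
    path = []
    if income >= 50_000:
        path.append("income >= 50000")
        if debt_ratio < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "review", path
    path.append("income < 50000")
    return "deny", path

decision, reasons = score_applicant(72_000, 0.25)
# decision is "approve"; reasons lists every rule on the decision path
```

An auditor can check each rule on the returned path against policy or fairness requirements, which is exactly the kind of transparency a million-parameter network cannot offer out of the box.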