The AI industry must enter its next stage of adoption to avoid the dreaded AI winter.
An AI winter is the traditionally cyclical fallow period in technology when innovation and R&D slow down or stagnate.
In its annual “AI Adoption in the Enterprise” report, published on March 30, technology education publisher O’Reilly Media found that the adoption of AI tools and technologies by companies has changed little in recent years. This is the fifth year that O’Reilly has published the report.
According to O’Reilly, sectors such as IT and financial services remain the most likely to adopt AI tools and technologies, while the government and education sectors are still largely evaluating AI.
When considering AI tools, organizations continue to favor open source machine learning frameworks such as TensorFlow and PyTorch, along with AWS SageMaker.
But the lack of dramatic growth in the rate at which companies use AI as part of their standard hardware and software suite may lead some to think the AI industry has stagnated.
In this Q&A, Mike Loukides, one of the authors of this year’s report and vice president of content strategy at O’Reilly Media, says that companies’ attitudes toward AI governance, ethics and safety must change to push AI adoption to its next stage.
What does the lack of meaningful progress in business adoption of AI indicate?
Mike Loukides: It feels like we’re at a crossroads. AI is in a dangerous state: in five years, are we going to find that we’ve built a lot of AI systems and they really aren’t what we want?
A couple of things bothered me. One was that the practical interest in ethics was exactly the same as it was a year ago and, relatively speaking, not that high on people’s list of concerns.

Even more worrying, after everything we saw last year with security, those numbers were flat, just as they were a year ago. Whatever we should have learned about security after a very bad year of ransomware and all sorts of other attacks, the AI community doesn’t seem to be learning. So I think that’s a big problem.
Is there a disconnect between what we hear about AI governance and what companies are actually doing?
Loukides: It’s really important that people understand responsible AI, and responsible computing in general. I don’t know [if] the message is getting across there.
What might spur enterprises to take AI governance and ethics more seriously?
Loukides: I think what will cause a change is that we’re increasingly seeing regulation like GDPR (General Data Protection Regulation) and the California Consumer Privacy Act. Those regulations will force change.
Regulation is important, but, particularly with technology, it’s often badly thought out and poorly done.
What are some of the security problems with AI for enterprises?
Loukides: I think the biggest security problems around AI are going to come up when you start seeing … more data poisoning attacks. For example, if a company creates a chatbot to do customer service, some class of people will almost certainly come along who think it’s fun to see if they can get it to become racist or misogynistic.
We don’t have good ways of getting a handle on that. One problem in the industry is that people are not terribly used to thinking about what can go wrong with this.
A point the ethics community makes [is that] as long as development teams are predominantly white and male, you won’t have people who are sensitive to the issues that people are actually facing in the real world.
There is a big problem of cultural sensitivity, and that tends not to happen if the development is done by … the old white boy network.
Editor’s note: This interview has been edited for clarity and conciseness.