This week’s guest is Akash Maharaj, Senior Data Scientist at Adobe. Highlights from our conversation include:
“When you’re picking an AI solution, picking as simple as possible to start with is actually always the best advice.”
“A lot of our customers, they come to us and they say, ‘Hey, why aren’t you using this really cool deep neural network with all these fancy bells and whistles?’ And we’re like, ‘Okay, yeah. Those things are really fragile.’”
“The most subtle data science decisions can have big ethical implications.”
This week’s guest is Andrew Burt, Managing Partner of the AI-focused law firm bnh.ai. Andrew’s interview is full of great insights like these:
“The biggest barrier to the adoption of AI and machine learning is not actually technical. The actual technology is fairly commoditized. The biggest barriers are risk-related and they’re policy-related and they’re law-related.”
“AI is great, but if you want to be serious about responsible AI, you need to be ready to respond when something actually goes wrong.”
“Even without new regulations on AI, there are a whole host of laws and ways that AI can create legal liability right now.” …
This week’s guest is Brian Markwalter, SVP, Research & Standards at the Consumer Technology Association.
“We’re at a good point where people quit talking about AI for AI’s sake itself and focus more on, are we doing this right? Is it helping me? And are these solutions really good?”
“AI doesn’t help us with the fact that we all have different cultural backgrounds, different sensibilities around privacy, different judgment. …
This week’s guest is Paul Roetzer, the founder & CEO of the Marketing Artificial Intelligence Institute.
“Most marketers still don’t even know what [AI] is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil?”
“I am a big believer that the net positive of AI will be more jobs and it will create new opportunities for writers and for marketers. …
This week’s guest is Jennifer Bisceglie, CEO of Interos. For more than a decade, Interos relied on human experts to assess supply chain risk. Then came AI — and COVID-19.
“The problem that we focused on never changed. The technology and what technology is available to solve the problem, that’s what’s changing.”
“We will always have people, for the foreseeable future, involved from the beginning to the end. …
This week’s guest is Zayd Enam, co-founder and CEO of Cresta.
“The really big change that happened was with the printing press, when folks could sit down, identify and extract the lessons that they had, and write it down in a book, and then replicate that book and share it with everyone else. …
“AI is going to become the book that writes itself, where it’s constantly learning and identifying what are things that are successful and what is the knowledge to extract from each example — training example in the world.” …
This week’s guest is Chris Drumgoole, CIO at DXC Technology.
“Even though the raw technologist will feel like the use of human oversight’s going to hold them back, in reality it’s going to make the technology adoption go faster because it’s going to get people more comfortable with it. They’re going to understand it better. And then they’re going to start to trust it.”
“In the modern connected world, a bad piece of AI could make a really bad impact, really wide and really fast. …
This week’s guest is Pankaj Chowdhry, CEO at FortressIQ.
“The example is always used with the horse and buggy and the horse and buggy whip, and what happens when you transition that to the car. I don’t think they understand: in the analogy, we’re actually the horse.”
“One of the reasons that Silicon Valley is getting pretty maligned these days is that we move fast and break things. …
This week’s guest is UNC’s Mohammad Hossein Jarrahi, co-author of the article “Could AI Be Your Next Employee of the Month?”
Jarrahi is an Associate Professor at the UNC School of Information and Library Science, and a Fellow at Rethinc. Labs, part of the Frank Hawkins Kenan Institute of Private Enterprise at UNC Kenan-Flagler Business School.
“Organizations now have to think systematically about ways to optimize, at the high strategic level, the combination of their human and artificial capital.”
“Personification has really good affordances for establishing [human trust of AI] because that makes the interaction, the analysis of data, and just the communication process much more natural. …
This week’s guest is Seyward Darby, the author of “Sisters in Hate: American Women on the Front Lines of White Nationalism.”
That’s a heavy topic, and perhaps not an obvious one for an AI business show.
But a lot of the book deals with internet activity — the hate movement is online and of course algorithms shape how the message spreads. Darby has been swimming through this algorithmic stew, and she’s thought deeply about trends and groups that algorithms promote.
She’s got a unique and vital perspective on AI ethics, and we’re proud to have had her on Machine Meets World. …