
This article originally appeared in the San Francisco Chronicle on September 19, 2025
Nearly everyone who attended the AI conference at San Francisco’s Mission Rock this week had something to sell: an idea, a startup, an obscure software product, an abstruse patent, a dream.
A few people who had lost their jobs to artificial intelligence were hawking resumes.
“It’s really the good, the bad and the ugly,” said David Greenfield, a former New York City Council member who now runs a social services nonprofit, where he is trying to incorporate machine learning without laying anyone off.
During a whirlwind 36 hours in San Francisco, Greenfield rode in his first Waymo robotaxi, disembarking at the massive, warehouse-style conference center at Pier 48. Inside, he listened intently as people in lanyard badges discussed all the issues that have captivated San Franciscans since OpenAI unveiled its first iteration of ChatGPT three years ago.
“I’ve met people who are deploying AI to do research on degenerative brain disease,” Greenfield said. “And then I’ve also met out-of-work programmers, or people who hop from one conference like this one to another, and just seem a little lost.”
AI has already spurred a post-pandemic Gold Rush in San Francisco, and a calendar littered with conferences and conventions, of which the one at Mission Rock is a standard-bearer. A bystander need only roam the conference’s sprawling trade show to see entrepreneurs vying for attention and seed money. At individual booths, CEOs made elevator pitches, brandished business cards and distributed all types of swag: stickers, canvas tote bags, tube socks in company colors, Ghirardelli chocolates, tiny stuffed animals, Rubik’s cubes, phone chargers, refreshments made by jamming straws into coconuts, a camera that snapped flattering AI headshots with an optional rock star motif.
Many attendees seemed palpably excited about the future of AI — which, incidentally, was the conference’s theme. Yet no amount of exuberance could hide the anxiety underneath, stemming both from unsettling prophecies that artificial intelligence will transform society in irrevocable ways and from the fear of an AI bubble bursting after an MIT research paper found that 95% of AI pilot programs stumble.

“Well, everyone here keeps quoting that 95% fail statistic,” said Aimee Lefebvre, director of customer experience at the medical tech company AbleNet. She came to the conference with co-workers, mostly for the educational value.
“We’re not trying to replace anyone” with a bot, Lefebvre said.
It may be fitting that one of the most popular seminars Thursday, “Why Your Organization Is Built to Fail at AI (And What to Do About It),” drew a standing-room crowd of people seeking guidance on how to “rewire” corporations and by some alchemy “reskill” all the people with suddenly outdated skills in graphics or programming. Garth Andrus, who delivered the talk, spent nearly an hour afterward addressing a scrum.
Then there was the cross-section of businesses in the exhibition hall, most of which fell into one of two categories. About half were dedicated to some form of infrastructure build-out, providing the tools or software for companies to add AI “agents” to their workforces. The other half, by contrast, were selling “guardian” agents to watch and manage the other AI, on the notion that it could spiral out of control at any moment.
“You need guardrails,” said Ibby Rahmani, vice president of marketing at Trust3 AI, a San Francisco startup that produces an AI “accountability” platform, directed mostly at insurance companies and banks.
Artificial intelligence is unpredictable, Rahmani explained. The agents can give out bad information. They easily violate rules pertaining to privacy and personal data. Human workers only add to the chaos, he said, because people now feel empowered to create their own AI bots without telling management. Hence the need for companies, like his, to help police this wild frontier.
“We are going through a honeymoon period, but then reality will hit,” Rahmani warned, striking a sober tone.

Trust3 wasn’t the only company focused on managing and restraining AI. Others, such as Markup, promised to review and fix AI-generated content so it complies with a company’s standards for “tone, quality and accuracy,” said CEO Matt Blumberg. Still others appeared fixated on reorganizing the world’s information so that AI bots can consume and process it. Nexla, a San Mateo startup, specializes in “connecting” data from different systems — say, orders from a wide variety of restaurants or food stands that all get funneled into one delivery app, said founder Avinash Shahdadpuri.
Amid all these founders and marketing professionals were people who looked a bit older, or who wore business casual attire rather than a polo shirt with a corporate logo. Some wandered alone through the cavernous venue, clutching backpacks or the complimentary boxed lunch. A few said they had come to the conference out of curiosity, or with an abstract hope of being discovered.
Among them was Andre Thompson, a teacher at a technical college in Georgia. He had written a patent, he said, for technology that would fact-check AI for hallucinations, bias and “topic drift.”
Thompson found the San Francisco conference through an internet search and saved up money to go, believing he might find experts to peer-review his patent. After failing to grab anyone’s attention at the vendor booths, he stood up during a Q&A session Wednesday after a panel on ethics.
“I was the kid in the back of the class raising my hand, ‘Pick me! Pick me!’” Thompson recalled with a sheepish smile. Nervously, he introduced himself, described his company, and said he was looking for collaborators. By that time, Thompson — who is Black — had gotten the unsettling sense that people might respond more enthusiastically “if I had been of a different pigmentation.”
But after the panel, a tech worker from Italy approached Thompson, saying the patent was interesting. Maybe the conference had paid off.
Originally written by Rachel Swan