Larry Bradley and the Art of Caring

This is part of our series on the inaugural inductees of Maestro Group’s Executive Hall of Fame. These 12 individuals have been honored for their dedication to advancing their employees from salespeople to sales professionals, holding their teams accountable, treating sales as a science, and modeling best practices within their organizations.

May 14, 2025

By Alicia Oltuski

At the end of Maestro interviews, we like to ask our subjects what questions we didn’t ask them that we should have. Larry Bradley, the CEO of SolasAI, answered: “A couple of the times that I was wrong.” The suggestion is typical of Larry’s humility, a quality that runs through both his professional and personal life. He is an active volunteer with people who have recently completed addiction-treatment programs, “people who are trying to get back on their feet.”

Sobriety, he says, as well as the volunteer work he does, has been a big part of his journey—including his work in AI. “We definitely would like to be financially successful. There’s no doubt about that. But I think a more important thing to all of us is that we want to have that impact…a significant part in helping the world use and take advantage of the promise of AI and minimize the downside. That’s why I want to do this: because I do believe there’s a lot of great opportunity for good and positive impacts…”

So, what is “this”? See below.

PREDICTIVE/BIASED MODELS

We live in a society that hasn’t really figured out AI and hasn’t really figured out equality, let alone equity. (Here is a helpful explanation of the difference between the two.) That’s not an auspicious combo, and it’s something the business community hasn’t really reckoned with. Case in point: predictive models. Looking for a loan? Your bank is going to lean on one or more predictive models to try to figure out how likely you are to pay back that loan. Tons of industries use predictive models. But models can be biased, because the data they rely on often is. Regulatory bodies have started to react to this problem, which, of course, scares a lot of people.

But those people, says Larry, are overlooking an important piece of the predictive-model puzzle: the universe of viable models is gigantic, in some cases virtually limitless. AI has only multiplied the number of suitable models that businesses can generate in a short period of time, for a wide variety of use cases. AI can contribute to the problem of biased algorithms, but it can also make algorithms fairer—which, in turn, protects businesses from regulatory violations and findings. These were Larry’s goals in developing SolasAI alongside his co-founder Nick Schmidt.

IT→MBA→CEO (WITH A FEW STOPS IN BETWEEN)

Like many of our Hall of Famers, Larry Bradley played competitive sports (football) as a young person. He played all the way through college. “I should have stopped a couple of years before I did…” He sustained a slew of injuries. “I am jealous of the guys who stopped their freshman and sophomore year. I do think the worst damage was done junior and senior year.” As captain of the team, though, Larry learned something about management: “A leader is not somebody who gets people to do what they don’t want to do. A leader is somebody who marshals and provides an example to people,” in order to help them accomplish the goals they are passionate about.

After Larry graduated, he went on to earn an MBA at Georgetown University (where he’d also served as manager of client services and chief IT architect), then became a senior consultant—and subsequently a director—at Gartner, a firm known for both its consulting and its popular industry reports. Over the next decade, Larry held several influential positions at a variety of tech companies. What moved him into the startup space was a desire to “be in the middle of the action.”

GUARDRAILS NOT OPTIONAL

When Larry talks about the beginning of SolasAI, he talks about Nick Schmidt. Nick worked with Dr. Bernard Siskin, known as a pioneer in court-admissible statistics.

In 2020, Nick and Larry got together to apply this knowledge to machine learning. Nick had already applied the methodology behind what would become SolasAI’s software at well-known companies in the financial and healthcare sectors. He believed it could help businesses reduce both discrimination and risk while maintaining accurate models. But he felt he needed Larry to make it happen. “As he likes to say, he had heard so many times that consultants make [bad] software. And so, he wanted to bring in people who had experience building software products and then taking those to market. And that’s when he called me…”

Although Larry and Nick are driven by purpose, their users don’t necessarily have to be. Making fairer models means making models that are less susceptible to regulatory, legal, and reputational risks, as SolasAI’s landing page points out. A big part of their customer-education load is refuting the false fear that fairness must compromise the value or accuracy of a model. Larry explains, “I think that’s probably the thing that surprises other people the most…when you think of these AI models, or you think of these machine-learning models, most people think there’s one right answer, and that’s almost never the case…There are so many different ways that you can select which model is best. It does give you lots of options to improve things like fairness without hurting any of the performance.” In fact, for many industries, the software sometimes identifies higher-performing models.
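
To make that idea concrete, here is a minimal, hypothetical Python sketch (not SolasAI’s actual algorithm) of what “lots of options” can look like in practice: a model search turns up several candidates with nearly identical accuracy, and among those near-equivalent candidates you can simply prefer the one with the smallest fairness gap. All model names and numbers below are invented for illustration.

```python
# Hypothetical sketch: picking among near-equally-accurate candidate models
# by fairness gap. Not SolasAI's actual method; all values are invented.
from typing import NamedTuple, List

class Candidate(NamedTuple):
    name: str
    accuracy: float       # e.g., holdout accuracy
    approval_gap: float   # e.g., difference in approval rates between groups

# Pretend these summaries came out of a model / feature-set search.
candidates: List[Candidate] = [
    Candidate("gbm_42_features",   accuracy=0.847, approval_gap=0.19),
    Candidate("gbm_38_features",   accuracy=0.845, approval_gap=0.07),
    Candidate("logistic_baseline", accuracy=0.812, approval_gap=0.03),
    Candidate("gbm_full_features", accuracy=0.848, approval_gap=0.22),
]

TOLERANCE = 0.005  # treat models within half a point of the best as equivalent

best_accuracy = max(c.accuracy for c in candidates)
near_best = [c for c in candidates if c.accuracy >= best_accuracy - TOLERANCE]

# Among the practically equivalent models, prefer the smallest disparity.
chosen = min(near_best, key=lambda c: c.approval_gap)

print(f"Best raw accuracy: {best_accuracy:.3f}")
print(f"Chosen model: {chosen.name} "
      f"(accuracy={chosen.accuracy:.3f}, gap={chosen.approval_gap:.2f})")
```

In this toy run, the chosen model gives up three-thousandths of a point of accuracy relative to the raw best while cutting the approval-rate gap from 0.22 to 0.07, which is exactly the “improve fairness without hurting performance” trade Larry describes.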

The key lies in the guardrails. “When we hear people saying [AI] just needs to be unrestricted, that’s like saying, ‘Hey, let your kids go be feral, and it’s all going to be fine.’ No, we’re going to end up in some dystopian future if we don’t do this intelligently and intentionally, and manage for both business upside, but also manage those risks. And anybody who manages a business knows you have to both manage for opportunities and manage risks—saying that that’s not true with AI is just foolish.” Today, SolasAI is growing, and Larry hopes that it will do its part in letting people use AI for good.

P.S.

Want to know a mistake Larry volunteered at the end of our interview? Okay, here it is: the first person he hired for one of his previous startups, he told me, was incredible, “and I was sure I needed to find his clone.” As they continued hiring, Employee #1 approached Larry and said, “Larry, I don’t think we’re looking for the right person. And in fact, this one person that we interviewed that you said no to, I think, is the right person.”

Larry disagreed. He explained his reasoning and turned down the advice. About two days later, his employee reapproached Larry, saying, “Larry, I really think we’re looking for the wrong person.”

“I said, ‘No, no, we’re looking for the right person. That person that you think we should hire is not the right one.’ And when he came back to me the third time, I was like, you know what? He really believes this, and he must be seeing something that I’m not.” He hired the person his employee had suggested. “He was 125% right. The two of them were a dynamic duo and had a huge impact on the organization. We didn’t need a clone. We needed basically the other puzzle piece, and that, I think, was a really, really important lesson for me.” Today, when Larry finds his team members feeling very strongly about something, he listens.

You can learn more about Larry here. Be sure to congratulate him while you’re there!