From "Could We?" to "Should We?" and HOW.

I spent 30 years in rooms where the question was always the same: could we? Could we bring it to market faster than our competitors? Could we make money doing it? Could we solve the problem the market was facing? Could we be the market leader in this category?

The goals were clear: drive revenue, outpace the competition, grow the customer base. And the stakes were real: shareholder value, customer satisfaction, job security. I understood those stakes. I believed in them. I was good at working within them.

I got caught up in it, honestly. It was easy to do at a company like Cisco, which was doubling its business every year when I joined. The energy was extraordinary. The momentum was intoxicating. And nothing we built had the capacity to cause the kind of harm that would have made me stop and ask a different question. Until now.

I have been thinking about artificial intelligence since my college days, when a philosophy course posed what felt like a purely theoretical question: could a computer ever think like a human? It was an interesting intellectual exercise. Nothing more.

What I never asked then, what none of us asked, was whether it should.

That question did not become urgent for me until recently, when I stopped asking whether AI could replicate human thinking and started asking whether it could replace human beings entirely. That shift, from replication to replacement, changes everything. It transforms a technology question into a moral one. And moral questions require a different kind of leadership than the rooms I spent 30 years in were built to provide.

Because here is what is actually at stake now. Not just shareholder value. Not just market share. The very essence of how we live, how we work, how we learn, how we relate to one another. And, in the deepest and most serious conversations happening right now, whether the version of humanity that emerges from this transition resembles the one that entered it.

Throughout history, technology has enhanced the human experience. Automobiles and planes transformed travel. Agricultural advances fueled the Industrial Revolution. Computers and the internet changed every facet of life as we know it. Each of those transitions was disruptive, frightening to some, but ultimately navigable. We adapted. We survived. We often thrived.

AI is poised to do the same, but at a scale, a speed, and a depth that surpass anything that came before. And unlike previous technological revolutions, this one has the capacity to make decisions. To learn. To operate without human intervention. To outperform, outwork, and in many domains already, outthink us.

Which brings me back to the question nobody in those rooms was asking: should we? Not "can we profit from this?" but should we build it? Should we deploy it? Should we move this fast? Should we put this capability in these hands? Should we allow this decision to be made by an algorithm rather than a human being?

These are not anti-technology questions. I am not suggesting slowing down. I am advocating for asking better questions before we accelerate. Here is what concerns me most: the people making these decisions may benefit at the expense of everyone else. Our legislative bodies are regional, slow, and frequently outpaced by the technology they are trying to govern. The most powerful voices in the AI conversation are not necessarily the wisest or the most ethical; they are often simply the wealthiest. And throughout history, the masses have repeatedly found themselves at the mercy of those with the most power, watching decisions get made in rooms they were never invited into, living with consequences they never consented to.

I have spent my career in those rooms. I know how decisions get made. I know what questions get asked and which ones get skipped. And I am telling you: the "should we?" question gets skipped. Almost every time. Because it slows things down. Because it complicates the revenue model. Because it requires sitting with uncertainty rather than charging toward a launch date.

"Should we?" is already out in the world. That question has been asked. The technology exists. The race is underway.  What we need now is a better question.

Not "could we?" Not even "should we?" but HOW should we?

How do we develop AI in a way that augments the human experience rather than diminishing it? How do we ensure that the benefits are shared rather than concentrated? How do we build the governance structures, the ethical frameworks, and the cultural norms that keep the most powerful technology in human history pointed toward human flourishing?

I do not have all the answers. Nobody does. But I have spent 30 years learning how organizations make decisions, how markets move, how technology gets adopted, and how people respond when change outpaces their capacity to understand it. And I have spent the last several years studying AI strategy and ethics specifically because I believe this is the most important question of our time.

The shift from "could we?" to "should we?" was the beginning of wisdom.

"HOW should we?" is where the real work starts.

That is the work I am here to do.
