
There's a version of career advice that has felt true for a long time.
Go deep in one thing. Become the expert. Own a vertical so completely that you become the person people call. The T-shaped model, a broad surface with one deep spike, has been the dominant framework for a while now.
That advice isn't wrong. But I think the T is shifting.
AI got really good at going deep. Not at everything, and not perfectly, but direct an agent toward a specific problem with enough context and it will outpace most generalists and keep pace with many specialists. It can write the SQL, parse the research paper, draft the legal summary, generate the code. Depth, for a lot of tasks, is becoming a commodity.
What AI is genuinely not good at is ranging. Connecting a pattern from genomics to data engineering to agent orchestration and knowing why that matters. Holding five different mental models at once and deciding which one applies. That's not a training problem. That's a life problem. And it's one generalists have been quietly solving for years.
A generalist carries a lot of surface area. They might have a foot of understanding across a dozen domains, not deep, but real. Enough to have a conversation, ask the right question, recognize when something's off.
With AI, that foot becomes a foundation. You point the model at the vertical, give it direction, and now you're operating with something closer to depth, on demand, across all of them.
A specialist with a narrow spike is still valuable. They can verify. They can catch what the model gets wrong. That matters. But they're working in one room. The generalist is moving between rooms and knows which doors to open.
I didn't plan to be a generalist. I don't think anyone does.
I grew up in Kenya, came to the U.S. for college, joined the Army, spent years in medical labs and genomics research, taught myself Python from a book someone handed me, enrolled in a CS program at Penn, became a data engineer, then a developer advocate, and now I build agentic systems in production. None of that was a strategy. It was just following what felt interesting at each turn.
But the through line is surface area. Each of those environments gave me a different mental model. The lab gave me rigor: you don't trust output you haven't validated. The Army gave me structure: define the mission, assign the resources, execute, debrief. Growing up in a different country gave me something harder to name, a kind of distance from the assumptions most people carry without knowing it. You learn to observe a system before you assume you understand it.
All of that is what I draw on now when I'm working with AI systems. Not any single deep expertise. The range.
So the question isn't whether you should be a generalist or a specialist. Most people don't get to choose cleanly anyway. Life makes you both things at different times.
The better question is: what have you already lived through that you're not counting?
The career change you made five years ago. The industry you left. The country you grew up in. The job that felt like a detour. Every one of those things changed how you see, and that's what you're actually working with now.
For a long time, the quiet anxiety of being a generalist was feeling one step away from being found out the moment something got hard enough. That you needed to pick a lane, go deep, stop moving around. That breadth was a liability.
That's inverting. The depth you need on any given problem is increasingly something you can get to. The range you've built over a lifetime isn't.
The people who are going to build the most interesting things over the next few years aren't necessarily the ones who know one domain the best. They're the ones who can hold multiple domains at once, move between them, and point the right tools at the right problems.
There's a good chance that's already who you are, and you just haven't had a reason to call it an advantage until now.