Michael Charles Mayberry will tell you, “I’m really, really nerdy.” Since joining Intel in 1984 as a process integration engineer, he’s had a chance to “work with the best and brightest and let my inner geek run free.”
Over his career, he’s worked on EPROM, flash and logic wafer fabrication processes. He’s developed roadmaps and test processes for Intel processors. He’s led Components Research, Intel Labs and, until recently, the Technology Development group.
Now, after 36 years, Intel’s chief technology officer is retiring. In the Q&A below, Mike shares insights into his career, Moore’s Law, advice for other engineers – and the future Intel advances he’s excited to watch unfold.
Q: How did you first hear about Intel?
A: I was in graduate school at Berkeley and somebody from Intel dropped by and asked, “You have any good students?” The Intel recruiter ended up talking to me for a couple of hours. I started in Santa Clara working on a project called “cross-tie RAM,” a magnetic memory technology. It was a secret project, which is why they couldn’t tell me exactly what I’d be doing until I joined Intel.
Q: Did you ever expect you’d stay at Intel for 36 years?
A: No. In fact, the person who hired me basically said, “Hey, Silicon Valley is a hot place. Try it for a few years and you can always jump over and find a different job somewhere else if it doesn’t work out.”
Q: So what hasn’t changed at Intel in your time here?
A: Way back then, there was a book about the 100 best companies to work for. Intel was listed as a place where you could work with the best and brightest. That hasn’t changed.
Q: How have you gained new perspectives over the years?
A: I know what makes a happy day for me. Getting things done is one thing. Learning something new is another, and that’s really what’s helped keep me going for a long time. I’ve moved around when I felt that the rate of learning was slowing down, and so I’ve changed major careers about every 10 years. The third piece is being surprised. A routine day with no surprises is just a day. Surprises can be both good and bad, so a day where I get stuff done, I learn something new and I have a pleasant surprise — hey, that’s a great day.
Q: Have you ever felt that the future of Moore’s Law has rested on your shoulders?
A: Not my shoulders personally, but I think all engineers who get immersed in it hear the story about Moore’s Law being dead. Moore’s Law doesn’t look the same now as it did 10 years ago, and 10 years ago it didn’t look the same as it did a decade before that. It’s always evolving, but giving up is not something that I think is in our DNA.
Q: What advice would you give somebody just starting a career and somebody 10 years into their career?
A: I’ve given career advice many, many times. Usually for somebody who’s relatively new, it’s that you need to make the best of what you have. Excel at where you’re at. You can’t come in and think, “Well, I didn’t really want this job. I wanted to immediately jump to something different.” That’s not the way the world works. You’ve got to go learn what you can about the job, and you can always be on the lookout for what comes after that, but at any moment in time, you can only live in the present.
For somebody a little bit further on, you have to know yourself: you have to know what you’re good at, what you like to do and what has market value. If you have all those things together, you can plan a career around it, but there’s always an element of chance. Many of my opportunities have come because somebody decided to retire, which is why you couldn’t predict them. I’ve never planned out my career for more than a year or so into the future.
Q: What breakthroughs from Intel will you be watching out for in retirement?
A: We have an interesting challenge with the direction of compute at this point in time. To oversimplify a little bit: at the height of the PC market in the ’90s, when everybody had to get on the internet and be productive, software was simple, the tasks were simple, and you tried to build hardware that would run faster.
Today, there is such a broad range of workloads. AI workloads are different from graphics workloads, although in some cases they run on similar hardware. Single-threaded tasks and the kind of massively parallel tasks you do in the cloud are very different from one another. The notion that there’s one architecture to rule them all has sort of governed our existence for a long time. And winning the architectural wars was one of our many challenges along the way.
It’ll be interesting to see the next inflection point for computing.