
Q&A With Morgan Gregory

Google Cloud Office of the CTO: Strategy & Programs at Google

April 30, 2020
Morgan Gregory runs strategy and programs for Google Cloud's CTO Office, working closely with its senior leaders and the technical CXOs of strategic customers on co-innovation and thought leadership initiatives.

Morgan has a passion for paradigm-shifting technology and has a side hustle in AI. She's particularly interested in AI for Good and Responsible AI, topics that marry her technical background and her desire to make a difference.

Deck 7: Did you always know that working in technology was what you wanted to do? What made you want to work in computer science?
MORGAN GREGORY:
I didn’t always know that I wanted to work in technology. Growing up, it was actually my brother who people predicted would have a career in tech (he now owns and runs a craft brewery in Gibsons, BC).
I always loved math and science, and in my first year and a half of university, I took courses that ranged from organic chemistry, to modern stellar astrophysics, to molecular cell biology. I also took a computer science course thinking it would be a valuable tool in whichever field of science I chose to major in. Instead, I fell in love with computer science -- with the creative problem solving and the potential to accelerate innovation across industries -- and ended up choosing it as my major.

I find that these are the same reasons I love my job today as a member of Google’s Office of the CTO (OCTO). Technology continues to offer creative problem solving opportunities, and the work we do in OCTO helps to catalyze innovation with our customers around the world.
 

"Technology continues to offer creative problem solving opportunities, and the work we do in OCTO helps to catalyze innovation with our customers around the world."
D7: It’s no secret that women in technology have felt their gender has affected the way that they are perceived. Have you ever come across a situation like this?
MG:
I have, unfortunately. I can think of notable examples from my undergrad in computer science, and in each of my jobs.

Here’s one: at a social event at an offsite, I was chatting with someone I had just met about the video game industry. I’m not a gamer myself, but my husband has worked on some big titles over the past decade, so I know a thing or two about the industry. This person knew that I worked in OCTO, and as he was leaving he made a comment assuming that I was an ABP (Administrative Business Partner).
I have a huge amount of respect for the ABPs I work with. Our team would absolutely fall apart without them. But I’m not an ABP, and he made an assumption about my role based on my gender. Now, these types of things might frustrate me in the moment, but then I move on. I can’t get enraged about every one of these instances - I wouldn’t have enough energy left to do anything productive!

What I appreciated about this instance was that my teammates heard it and were enraged on my behalf. They reached out to my manager, and he both checked in with me and took corrective action with that individual. The strong message I received was that my teammates had my back and my manager valued me. Having this type of support from my team lets me focus on the impact of my work without worrying about how I’m perceived by them.

D7: What is the best part of your work that you'd like to share with us?
MG: 
I love that I have the opportunity to work on challenging projects, with the potential to have a huge positive impact on the world, with some of the smartest people I’ve ever met. One of the topics I’m particularly passionate about is Responsible AI.

My introduction to Responsible AI happened last year when I met Timnit Gebru, a researcher at Google who had worked on a paper called Gender Shades. This work showed that some AI models look like they have high accuracy when you evaluate them across the entire population, but when you look at subsegments, they can perform poorly for groups that are underrepresented in your training data. For example, the facial recognition models evaluated in her work showed error rates of less than 1% for lighter-skinned males, but up to 35% for darker-skinned females.
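To make the idea concrete, here is a minimal sketch of that kind of disaggregated evaluation: comparing a model's overall error rate against its error rate per subgroup. This is not the Gender Shades code; the labels, predictions, and group names below are made up purely for illustration.

```python
# A toy illustration of disaggregated evaluation: overall accuracy can look
# reasonable while the errors are concentrated in one underrepresented group.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the overall error rate and a per-group breakdown."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    overall = sum(errors.values()) / sum(totals.values())
    per_group = {g: errors[g] / totals[g] for g in totals}
    return overall, per_group

# Hypothetical data: the model is right most of the time overall,
# but every one of its mistakes falls on the smaller group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "A", "A", "B", "B", "B"]

overall, per_group = error_rates_by_group(y_true, y_pred, groups)
print(f"overall error rate: {overall:.0%}")   # 30%
for g, rate in sorted(per_group.items()):
    print(f"  group {g}: {rate:.0%}")         # A: 0%, B: 100%
```

Reporting only the aggregate number would hide the disparity; breaking the evaluation down by subgroup is what surfaces it.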

This work is an example of fairness in AI (if you’re interested in learning more, I wrote an article on this topic in Forbes), and it’s one important pillar of Responsible AI. At Google, we’ve codified our approach to Responsible AI (including fairness) through our AI Principles. It’s inspiring to see these come to life in our research team, our policy and governance organization, and in product development groups across Google.

This is a new and growing field of AI, and while there’s still a lot of work to be done, many of our customers have been looking to Google for lessons learned. The part of my work that I love the most is when we find collaborative innovation opportunities with our customers; projects where we can share some of the work we’ve been doing in Responsible AI, but also where we can work together to learn something new and contribute to the advancement of this important effort for the industry.

"Timnit Gebru's work showed that some AI models look like they have high accuracy when you evaluate them across the entire population, but when you look at subsegments, they can perform poorly for groups that are underrepresented in your training data."
D7: What’s your typical day like? Does it even exist for you?
MG: 
I’d say that I do have a somewhat typical day, although it’s changed recently due to the COVID-19 measures taken to help flatten the curve. Before March, two days a week I’d go in early and do a workout at work. My days would then be a mix of meetings and some focused time where I could get work done. After work, I’d pick up my kids from daycare and we’d do dinner and bedtime as a family.

Over the past 6 months, there was also some travel mixed in, usually to conferences where I’d either go to learn about the latest AI research (NeurIPS, FAccT, AIES), or I’d go to speak (Google Cloud Summit, Global AI Conference, O’Reilly Strata Data and AI, Women of Silicon Valley).

But since the COVID-19 pandemic, my typical day is very different. I feel so fortunate to be in a job that allows me to work from home, and at a company that understands the challenges that parents of young kids face when daycare closes. Since mid-March, I’ve been at a reduced work capacity as my husband and I take turns with our two daughters (4.5yo and almost 2yo).

I’ll typically start my day with a run or a HIIT workout via video conference. Then, when it’s my turn to work, I’ll set up my work area at the kitchen table and settle in for the day. Our team has a group Google Chat and a “water cooler” video chat that anyone can pop into to say hi, and I find those great tools for feeling connected while we’re all working from home.

"Responsible AI is a new and growing field of AI, and while there’s still a lot of work to be done, many of our customers have been looking to Google for lessons learned."
D7: We can see that your career spans more than 20 years. How would you summarize the journey?
MG: 
When I think about my career, it’s had 3 major chapters so far.

Chapter 1 was technology focused with a computer science degree, a job as a software developer, and a job as a product manager.

Chapter 2 was business focused with an MBA and some time in management consulting to build out a business and strategy toolkit and gain experience across industries.

Chapter 3 has been bringing those two together during my time at Google Cloud, helping to build OCTO and sitting at the intersection of technology and business.

I’m hoping Chapter 4 will bridge the work I’m doing now with my desire to have a societal impact. The Responsible AI work I mentioned before is a perfect example of this, and I’m excited to find new ways to contribute to that effort.


About Google Cloud CTO Office


The Office of the CTO (OCTO) is a small team of senior technologists that partners with the technical executives of some of Google Cloud’s biggest and most strategic customers, through what we call collaborative innovation. We work with these customers to understand their business priorities, and then together find new ways to achieve those using technology (including Google Cloud products). One tangible example of this was when we worked with a Financial Services organization to help them reimagine customer support. This collaboration was so successful that we ended up creating a new product called Contact Center AI. You can read more about the journey here.