Episode 45: Transforming Data into Outcomes: Lessons from the Public Sector to Approach Sustainability

Dave Karlsgodt 0:00

Welcome to the Campus Energy and Sustainability Podcast. In each episode, we talk with leading campus professionals, thought leaders, engineers and innovators addressing the unique challenges and opportunities facing higher ed and corporate campuses. Our discussions will range from energy conservation and efficiency to planning and finance, from building science to social science, from energy systems to food systems. We hope you're ready to learn, share and ultimately accelerate your institution towards solutions. I'm your host, Dave Karlsgodt. I'm a Director of Energy and Sustainability at Brailsford & Dunlavey.

Natasha Nicolai 0:36

I like to say we're not only moving the needle on the spectrum, but I'm trying to move the spectrum itself in terms of what our expectations are. And so that is a good way to work backwards. And so that was actually how we motivated the entire state. And then eventually the feds picked it up to change the nature of how we thought about measuring what matters. At the time, five, six years ago, that was quite the daunting task. That would have been a multi-million-dollar effort that likely took over a year and a half, that I would have had to pay some third party to come in and do for me. Today, we can do that in a matter of hours, to the tune of tens or hundreds of dollars, with incredible accuracy.

Dave Karlsgodt 1:17

In this episode, we interview public sector data and digital transformation leader Natasha Nicolai. Natasha shares lessons from her work as a researcher and a team leader working in and for state government. Our wide-ranging conversation challenges traditional thinking on public sector data. We talk through elevating evidence into tangible policy change, leveraging advances in data science to unlock outcomes in the public sector, and the energy and sustainability implications of the AI boom. Her insights offer a compelling exploration of the intersections of technology, sustainability and innovation. I think this conversation will leave you inspired and ready to accelerate change in your own organization. I hope you enjoy this conversation, recorded on December 18, 2024.

Natasha, it's great to have you on the Campus Energy and Sustainability Podcast.

Natasha Nicolai 2:12

It's good to be here. Looking forward to the conversation.

Dave Karlsgodt 2:14

Great. Well, one thing that's fun about running your own podcast is you really can choose the topic, even if it doesn't necessarily fit the theme of the show, and you can somehow figure out a way to stretch it back in. The host of one of my favorite podcasts I listen to, David Roberts, sometimes gets a little carried away with, like, bringing on musicians that he just likes, and then trying to tie it to his topic. So I'm not going to do that. I think this is a little more relevant. But at first blush, bringing in somebody that talks about data systems may seem a little odd to sync up with campus energy and sustainability. Certainly, my marketing director was a little puzzled when I proposed you as a guest, but I'm actually really excited. I think this may be one of the more relevant topics for our listeners this year. So I think we're going to get into all sorts of fun stuff. Maybe we'll touch on some things like AI and data centers, but your actual expertise is a little more fundamental than that, so I'm looking forward to how you frame this up today. But can you kick us off? Just tell us a little bit about who you are.

Natasha Nicolai 3:08

Well, first, let me start by saying thanks again for having me, Dave. I'm always grateful for an opportunity to have meaningful and interesting conversations with smart people who want to generate quality information and stoke thoughtful exchange. I do have a bit of a unicorn, I would say, career trajectory that started with a clinical doctorate in orthopedic and rehab medicine when I was pretty young, then took a turn toward public policy and earned me a second graduate degree and some really good mentors along the way, which then reoriented my career, starting with Mathematica Policy Research and eventually leading to a few appointments in the Newsom administration in the state of California, first as the state's TANF director, and then as chief data officer at the Health and Human Services Agency. And eventually, one more right turn to get me where I am today, and that is into cloud infrastructure and modern technology services with a focus on large data projects. I currently lead a team at Amazon Web Services that is dedicated to public sector data modernization and digital transformation in the form of both research and development, where we are looking to drive thought leadership and sustainment of innovation in the public sector spaces. And so we're constantly researching and developing around sophistications in complex public infrastructure to better align with industry standards, as well as advancement in skills and practices and actual implementation of those modern data environments at scale for innovative and ready public sector entities, who I think are trying to move the needle for state investment and federal dollar spend to drive outcomes and to demonstrate value, really breaking free from what is a legacy of norms and protected procurements and insane prices and 400% markup on deliverables that are out of date by the time they're usable, and dependencies on third party teams who don't really have their best interests in mind. And so now that you know how I really feel, I will also lead off this discussion by stating explicitly that everything I say today is my own personal opinion and statement as a human; again, nothing I say is meant to represent the opinions or realities of Amazon Web Services, my employer.

Dave Karlsgodt 5:16

Perfect. Well, I guess what I was most interested in was just sort of your articulation of usable systems, rational processes, like simplicity, sort of anti-Byzantine systems, sort of the opposite of bureaucracy. It's a very specific passion to develop, especially for somebody who started from more of a healthcare kind of perspective. So talk to me about how that came to be, and then we can sift through that.

Natasha Nicolai 5:40

Sure, well, I do like to say oftentimes, when I'm leading off in these conversations, that while my career trajectory might feel or look disparate, and yes, while it is a bit of a unicorn career trajectory, there are definitely through lines there. One of those through lines is systems thinking and analytics, and the other is data driven or evidence based practice. And one of the through lines associated with that data driven decision making or evidence based practice mental model is a very early learned, and then sort of just reality based, look at how difficult implementation is, so studying implementation science, but also change management and the way that humans make decisions and think, and really the intricacies and challenges of getting humans to change their behavior based on data. And so whether it was in medicine and in orthopedics, as the world was evolving and technology was modernizing in that space, functional MRI, you know, didn't become a particularly common tool to use in orthopedic medicine until the early 2000s, when I was starting to practice, and so sort of like everything we knew about how shoulders and hips and lumbar spines moved was upended and changed with really hardcore clinical evidence, you know, that we had been wrong, and now there was a new way of thinking. And as I watched my colleagues really struggle to grapple with that reality of being wrong and admitting that and moving on and doing better and being better and driving different things, I saw a lot of that same behavior pattern happening in public policy and in other forms of medicine. And so I don't think it's that surprising for someone who's rational like myself, and who really enjoys analytics and undoing assumptions, or only holding on to assumptions when there's reason to believe them, that data just becomes an obvious through line. And so, you know, I think as I moved into my career in public policy, it was so fascinating to watch millions and millions of dollars spent by the federal government to research and understand and comprehend effectiveness of public policy, only to find out over and over and over again that there were these big limitations around implementation and fidelity and understanding what was actually happening on the ground, and that a lot of those were driven by the limitations of data sharing, and those limitations in data sharing were driven by limitations in technology, or ways that technology had been deployed and delivered that really took ownership of that data and that understanding and that involvement from the states and from the feds and gave it to third party contractors and made these dependencies really hard to overcome. And so I became very motivated to elevate evidence based practice, and, you know, really drive towards a social contract that I think is apolitical, which is: we should know and understand and feel good about the things that we invest in, and be able to prove that they work, not only because I believe government is good and should be good and can do good, but that we also should have an accountability to demonstrate that we know how and we know why, and that it is real, and that, you know, it's not about having faith in a system that you can't see. You should be able to see it. And that is very, very possible.

Dave Karlsgodt 9:03

Interesting. So, yeah, it's basically like the functional MRI machine for medicine. You kind of want to create that for public entities. Is that maybe an oversimplification, but a decent metaphor...

Natasha Nicolai 9:13

I think that's a fair parallel. That's right. I think not only, you know, can we know, but I think we should know, and we should care to know. And I think that comes with the same token of being willing to admit when you don't know what you don't know, or be honest with yourself when something isn't working. I think those are the same vibes that you see in medicine, when we struggle to get older providers, perhaps, to change their behavior, or the same things that we see in legacy IT systems and older CIOs, who have pride of ownership or ego in the development of kind of where we got to, and it can become increasingly difficult in those situations to make better and different decisions. That's not always the case. And certainly, you know, I would say that some 30% or so of the customers I work with today are proving that that doesn't have to be the case, right? There are people that are motivated and proving it, and I guess my hope is that by accelerating that and delivering additional value there and elevating those stories and those messages, we can get more people to not only believe it's possible, but then actually do it.

Dave Karlsgodt 10:23

Yeah, well, as a recovering software developer, as I've described myself, I can speak from personal experience that with a lot of the systems you're talking about, that's a pretty common challenge. Like, how do you get the data out of this system that doesn't talk to that system? How do you get the people in that public entity to have the domain knowledge to even know how to store the data, or how to use the data, or even know that it's possible to do things with it, right? So I always felt kind of like a magician coming in from the next town over, performing magic tricks when I would come into these locations, just because I had some basic software development skills and access to some skill sets that a lot of other people didn't have. It made for, you know, a good first career, which was great. But it's interesting now, as the amount of information and the number of systems have proliferated, yeah, there's just a lot. It's a much bigger problem, and opportunity, which is, I think, what you're saying. Well, I guess one of the things I'd like to hear a little more about: I know you really kind of honed these skills during your time in the Newsom administration. Can you talk about some of those stories? Give some examples of how this played out in reality?

Natasha Nicolai 11:24

Sure, some of my favorites actually come from my time at Mathematica Policy Research, when we were looking at the application of brain science and the development of our understanding around two-generational approaches to poverty, and some of the work that was being done at the time at the Stanford poverty lab and at the Harvard Center on the Developing Child. And this recognition, again, that the policy world and the science world had evolved. We knew a lot more about what it took to make effective safety net programming and to update our models around what good case management within those systems looked like. The hardest part was proving, really, at that point, to thousands of case workers in a state that was held accountable to old federal regulations, that there was value in changing behavior. And so we spent a lot of time trying to drive data and evidence. For example, in the TANF program, we looked at a metric that's very old and, still today, is what states are held accountable for, called the work participation rate. And within that, there is this assumption that the thing we then care most about is getting people into work, when the reality is what you really care about is having resilient individuals who can recover from disruptions in their work experiences, right? Which means that you also want people who have strong self-regulation and strong executive function skills, and what we called, at that time, from the Harvard Center, adult core capabilities. And what that meant, then, is that you had to also let go of assumptions that had never been proven in the data, like, oh, if they just work hard enough, they'll get there; oh, you can just pull yourself up by your bootstraps. Those are things that we knew for a fact from the data were not true. And so as we were trying to then shift towards the things that we did understand from policy and science and human dynamics and brains and decision making, how did you get people to apply those lessons learned in a model that was previously incentivized by a completely different structure? And in doing that, right, we got very in the weeds around data and systems and those case management programs, but also the recognition that humans don't live in silos. An individual, you know, in our case, on average, we're talking about a young mother with one and a half children who is trying to navigate a difficult situation and is experiencing some amount of scarcity, right? That's not the only thing going on in their life. And so you've got to be able to look across these systems and understand things about mental health and housing stability and physical health and transportation, right? All of those things are going to feed into not only someone's ability to effectively navigate what is otherwise a very complicated scenario, situation, and policy landscape, but also meet their needs in these other ways, if what you want is success and stability and self-sufficiency, ultimately, over time. And so we found all these interesting things in the data, but Lord, it was really hard to get the data into a format where we could make those arguments. And that was when I really learned that all of that work we had done at Mathematica was not unique to any one of those programs or situations. It was ubiquitous. It's happening everywhere, all of the time, right? And so when you understand, you know, and you do, given your background, how technology has evolved, it's not that surprising sort of how we got here.
What's a little bit scary and surprising is that we're still here, right? The technology has now moved substantially and made things that were actually really difficult to do five or six years ago much easier. And so what was really helpful for me in the State of California was the ability to demonstrate the value in investing in data and data science and data aggregation, to make the case, to undo the assumptions, to paint the picture, to drive better policy decisions and better resource allocation. Once you obviate it, you know, there's no going back. Then people want more of that. And so it did lead to natural investments and shifts in the way that we thought about doing business. I mean, frankly, that chief data officer role that I ended up taking didn't exist before I was the TANF director and demonstrated a lot of value around how we thought about policy and resource allocation, etc. And then we had a department and an agency who said, oh my gosh, all this value that you're driving in TANF, you could be driving that in every one of our 13 major policy areas and programs, you know, from child welfare to housing to in-home supportive services and beyond, right? And so they actually created that role specifically so that I could step into it, so that we could start to figure that out. And I wasn't alone. Fortunately, I was one of five chief data officers at the time named by the Newsom administration to go tackle these problems. So then I had, you know, some reinforcement. But you know, the hardest thing is not getting people excited about data, believe it or not, which I actually thought was going to be the biggest challenge. I thought for sure, after I gave my first few presentations, hey, data is not scary, these are things you should want to know and be interested in knowing, I thought it'd be really hard to get people bought into that, and it wasn't. The hardest thing was actually getting the systems to do what we needed them to do, to actually allow people to have access, and to federate access to data and to consolidate data. So that was a pleasant surprise, but a completely different challenge, you know, that led me to my current career. I was the chief data officer, you know, during the pandemic, and talk about never working harder in your life. We were able to really unearth some very quick ways to solve for these problems in an environment where we had the authority and the permission to go fast and to break things, and to do that, which was huge. I think the disappointing thing for many of us today is that we didn't take a lot of those lessons learned during the pandemic forward, and in some ways, you know, wasted that emergency, in that we drove a lot of value by consolidating and accelerating a lot of data projects, but we didn't really learn from that what was wrong with the systems before we kind of recreated them. We did the same thing we had done before, just in a faster, better way, and that was to give away the keys to the kingdom to, you know, other third parties. And, you know, now you've got states kind of trying to pull it back. And so, yeah, I think I learned a lot in that time, translating between policy and policy research and data and data science, and then data science and technology systems, in those years in California, and I still have a lot of, you know, friends that are still working for that administration, trying very hard to make these things real. And I will say, and you can probably speak to this as well.
I imagine you see this across the higher ed and other landscapes as well. It is really hard to undo legacy infrastructure. There's a lot of fear and there's a lot of risk. And so, to a question you'd asked a few minutes ago, being able to bring a really hyper-rational risk reduction perspective to that, and demonstrate not only the value but the safety in doing those things, has been probably one of the most important things I figured out how to do in my career over the last three and a half years, so that people will act. Sometimes it's easy to convince them that they need to act, and then hard to get them to actually take the first step. Yeah. So, you know, one thing that we have really focused on is: how do you take people on that journey, holding their hand, putting one foot in front of the other? And a lot of times, you know, that's why I push my team and myself really hard to make sure that we're not experimenting with customers. We're not hoping it works. We are working very hard, before we put things in front of public sector consumers, to make sure that they work and that they work at scale. We don't sell black boxes. My team doesn't try to convince people that we know what we're doing if we don't; you know, we're very honest, and I think that creates a space not only to be a trusted advisor, but of psychological safety. Oftentimes, you know, to a point I made before, we really need people to be willing to admit that they don't know what they don't know, or you can't make progress. And so somehow creating a space in which that is welcome and safe and encouraged and invited and applauded is really, really important.

Dave Karlsgodt 19:25

And you know, this one hits a chord with me specifically. One of the things we're often doing in my work is helping with, say, a common project for us, which would be, like, a climate action plan. And the first step of a climate action plan is, okay, what's your current greenhouse gas inventory? You know, what does that look like now? Doing that for one year, you know, it's some work, but it's pretty straightforward. And usually there's really about five numbers that you need to figure out to get pretty darn close, and then you need a lot more work to get, you know, that last 5% kind of dialed in. But then you want to show it over, like, the last decade, because if you're going to go in front of your board, your CFO or CEO or your president of the university, whatever it is, you basically need to say, you know, here's our long term trend, explain it, and then have confidence in it. And then say, now that we have that basis, we can start thinking about future decisions, right? And, you know, they want to jump straight to, okay, what should I do now? Tell me what you think it is, and then I'll poke holes in it. But if, like, the data was collected by somebody else last year, and the previous year before that it was an intern, and they did a different methodology, and then you're trying to make sense of this stuff, but it's all historic, and it's already been approved, and sort of feels official, it's like nobody will do anything with it. And I am routinely on calls where I feel like I'm doing data therapy for folks, saying it's okay to fix your history. And one of the expressions I often use is, you know, those who refuse to fix their history are doomed to explain it, you know, endlessly. So I don't know, how did you kind of work through that? What are some of the methodologies you use? I agree you don't want to go in front of that board or that customer, or whoever that important group is, until you and your team have really worked through it and have confidence in it. But what do you do to gain confidence in it? Because I find that's the creative kind of interplay part that maybe isn't talked about as much.
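
To make those "five numbers" concrete, here is a minimal sketch of the arithmetic behind a basic scope 1 and 2 campus inventory. Everything in it is an illustrative assumption: the campus, the activity totals, and the emission factors are placeholders, not published values (a real inventory would pull factors from sources like EPA tables and regional grid data).

```python
# A minimal sketch of the "about five numbers" behind a campus greenhouse gas
# inventory. Activity totals and emission factors are illustrative placeholders
# for a hypothetical campus, not published values.

# Illustrative annual activity data
activity = {
    "natural_gas_mmbtu": 250_000,      # heating plant / boilers (Scope 1)
    "fleet_gasoline_gal": 40_000,      # campus vehicle fleet (Scope 1)
    "fleet_diesel_gal": 15_000,        # campus vehicle fleet (Scope 1)
    "purchased_elec_kwh": 90_000_000,  # utility-supplied electricity (Scope 2)
    "purchased_steam_mmbtu": 0,        # district steam, if any (Scope 2)
}

# Illustrative emission factors, metric tons CO2e per unit of activity
factors = {
    "natural_gas_mmbtu": 0.0531,
    "fleet_gasoline_gal": 0.0089,
    "fleet_diesel_gal": 0.0102,
    "purchased_elec_kwh": 0.0004,      # varies a lot by regional grid mix
    "purchased_steam_mmbtu": 0.0666,
}

SCOPE1_KEYS = {"natural_gas_mmbtu", "fleet_gasoline_gal", "fleet_diesel_gal"}

def inventory(activity: dict, factors: dict) -> dict:
    """Roll activity data up into Scope 1 / Scope 2 totals (MT CO2e)."""
    totals = {"scope1": 0.0, "scope2": 0.0}
    for key, amount in activity.items():
        scope = "scope1" if key in SCOPE1_KEYS else "scope2"
        totals[scope] += amount * factors[key]
    return totals

print(inventory(activity, factors))
# -> roughly {'scope1': 13784.0, 'scope2': 36000.0}
```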

Natasha Nicolai 21:08

Yeah, it's a great question, and this is something I use all the time when I'm trying to structure matrix decision making. I often will go back to my policy education and think about the Eightfold Path and creating criteria and helping people confront the status quo versus the next best thing. But what's so interesting about humans and the way that we think and make decisions is we can't see ourselves in it exactly. Oftentimes, it's hard for us to get over the hump. So one of the things that we do, honestly, and we've worked with some really fantastic producers at Makedata.ai, is create synthetic data so that we can actually create architectures and infrastructure that look like what that customer is used to seeing, and so that we can demonstrate the value. I mean, you have to remember, we're completely changing the frameworks and mental models around how one manages data life cycles across sophisticated and advanced technology in the cloud. We're really changing the game. I like to say we're not only moving the needle on the spectrum, but I'm trying to move the spectrum itself in terms of what our expectations are. And so we will create synthetic data. We will often actually then create immersive demonstrations where we're showing what is happening with that data across these life cycles. And then we will show the actual trade-offs in the product set, in the outputs, in the quality, in the capabilities to have flexibility and built-for-change environments. And one of the things that's usually helpful is for people to recognize that you also don't know what you want to know six months from now, or nine months from now, or 12 months from now, and that's a good thing. You want the data to lead you. You want your experiences and shared understanding, and depth of understanding as it increases, to direct where you go next and how you identify gaps and how you undo your assumptions. And so that means that fundamentally, you have to be managing your data in a way that allows for those evolutions, for continuous quality improvement. It is an ethos as much as anything. And so we really try to show that and show the trade-offs and confront the trade-offs. And one of the things that I pride myself in, and that my team is really diligent about doing, is brutal honesty, right? I will definitely tell a customer when I think they're about to make a decision that will be hard for them to reverse or that might paint them into a corner. And we don't just say that; we usually try to show it. We are sometimes okay supporting customers when they want to do that, but I want them doing it eyes wide open. I will always demonstrate their best interest when I do those things, and part of that is also making sure they understand when other people aren't doing that for them, right? So, you know, you paint a little bit of the contrast in the activity. An example I often like to give is that I'm not impressed anymore, and have not been for some time, by someone who can say, oh, I've consolidated 100 million banking records, right? I like to remind people that's an incredibly stable and mature data set in an industry that has known for a very long time exactly what it is that they are supposed to do, right? Whereas, when you're working with an entity and you're trying to combine even 1,000 child welfare and court and housing and public safety and school records? Boy howdy, right? You better roll up your sleeves, because we're about to do something really hard in a very complex way.
You know, that is going to deliver us insights and findings that we probably didn't expect and lead to the next best evolution of what we should actually be doing, either as we capture that data, or as we structure that data, or as we push that data out to consumers in meaningful ways. And that's a good thing, right? I like to remind people that those evolutions are important. We will do something different nine months from now than we're doing today, and that is a feature, not a bug, right? It's a good thing, not a thing to be scared of, right? But that's a different mindset that you have to somehow get very comfortable with. And so we are trying to step into that with that customer, to hold their hand, to do that, to de-risk it, to commit, to put our skin in the game, you know, just the same as they are, but also to make it okay to turn it off. This is one of the really nice things about cloud and cloud evolution, right? No longer are you buying a room full of servers and then hoping you built it right. You know, you can try something for 10 minutes, literally, decide it's not working, and just shut it off. And you know, there's no harm, no foul. And so it really is, you know, we're in a different era as we think about what risk looks like and how we mitigate those risks in this space, and how we demonstrate value and value add, not only through the shifts in the technology, but the mental models and workflows that we apply to those to make them more effective and safe and flexible and, like I say, built for change.
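
As a toy illustration of the synthetic-data idea Natasha describes, here is a minimal sketch using only Python's standard library. The field names, categories, and distributions are invented for the example; they are not drawn from any real system or from the tooling she mentions.

```python
import datetime
import random

# Mirror a customer's record layout so an architecture can be demoed end to
# end without touching real, sensitive data. A real effort would match the
# customer's actual schema and value distributions; these are placeholders.

random.seed(42)  # reproducible demo data

CASE_TYPES = ["child_welfare", "housing", "court", "public_safety"]
STATUSES = ["open", "in_review", "closed"]

def synthetic_case(case_id: int) -> dict:
    opened = datetime.date(2023, 1, 1) + datetime.timedelta(
        days=random.randrange(365)
    )
    return {
        "case_id": f"C{case_id:06d}",
        "case_type": random.choice(CASE_TYPES),
        "status": random.choice(STATUSES),
        "opened_date": opened.isoformat(),
        "household_size": random.randint(1, 6),
        # Weighted so most cases have a caseworker, as real data might
        "caseworker_assigned": random.random() < 0.85,
    }

records = [synthetic_case(i) for i in range(1000)]
print(records[0])
```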

Dave Karlsgodt 25:55

Yeah, all right, well, let me slow you down slightly. I spent a good decade-plus hanging out with, like, computer scientists and thinking about data structures and doing all this stuff. So I'm totally with you, and you're speaking my love language here. But I am also doing this podcast for people who largely are environmental science majors that all of a sudden found themselves in charge of, like, the greenhouse gas footprint of an organization, which is very much a data problem, not a biology problem. And I'm sure most biology problems are now data problems too, but that's a different conversation. But I think, coming back to what you had said before, it's like getting people not to be scared to try stuff, the creative process that happens before you really dig in and make the conclusions. Let's pick apart a couple of those things, because one of the things you said was banking systems are not impressive to aggregate, because it's been done 1,000 times; it's really stable. I think that's pretty widely true across all organizations. I mean, I can usually get how much money was spent on electricity. Getting the kilowatt hours that were purchased, that's a lot harder. Getting more detailed data about the meter at, like, the other increment levels you would need for an engineer to really get excited about it, that's way harder. And then knowing what to do with that information is yet more difficult. So I mean, there's kind of layers there, but that person who is the environmental science major, who's now in charge of the greenhouse gas footprint, has to sort of navigate all of that. Let's maybe take more of a deterministic, simple case, like, how would you approach that? More the banking structure, and then maybe we can go to something that's more like a social science kind of case management record, and, like, what's the difference between those? Can you parse those out for me? Is that the right question?

Natasha Nicolai 27:29

Sure, I think that's a fine question. And actually, this might help your students to start to imagine the world that we're entering with the technology as it's evolving today. So this is not uncommon for me to explain in a room. One thing I want to say very quickly, that you mentioned earlier: you said the word magic, and I will say that one of my pet peeves during the pandemic is that I earned this nickname, 'The Magician,' because I was presenting data and data models to people that they never thought were possible. And I hated that, because in reality, right, what it really was was blood, sweat, and tears, and seven people and myself spending 14-hour days, you know, just like pulling our hair out, making it work. And my preferred nickname along that trajectory was 'The Professor.' And so this is where I spend a lot of my time, is just around this education. And so, for example, to your point and your question, there are data structures that are very, you used the word stable, I will say routine or mature, where the nature of how we collect and look at and explore and structure, literally, physically structure, that data hasn't changed much. You have known entities, known quantities, known requirements, a lot of regulation and policy that's very structured and mature, and so that data itself hasn't evolved tremendously over time. It's, to use your word, stable. The same exact structure may have existed 60 years ago as it exists six days ago. What that means is that from a computer standpoint, from a technology standpoint, from a physical structure standpoint, consolidating that is quite simple, because you have a linear mechanism to pull through, and all you're really doing is either aggregating or adding, for example, rows or columns or layers to that data, and you don't have to worry about losing values in those consolidations, or the nature of that data changing from an interpretation perspective. How I interpret a particular value today is probably the same way it was interpreted 20 years ago, is probably the same way it was interpreted 40 years ago, and so that kind of work makes it very easy for both AI and ML processes to ingest and convert, but also for physical data structures to be remade, from a database itself to the cloud, for example. So the ways that we used to model data physically were pretty limited by the ways that the technology allowed for us to structure rows and columns and layers at the time, often relationally, right, often things drawn back to known points of context. However, you know, the policy landscape changes. Child welfare is a great example. A lot of that data changes every six months, because the policy changes every six months, or requires you to collect a different piece of information. Or this case manager and that case manager, you know, are trained in different ways and have matured differently and comment differently on what they're doing. And then you end up with court records. Holy moly, I've recently gotten deeper into court systems for a couple of big state projects, and those are, they're crazy. They're structured to literally change over time as the case changes. Rather than appending to a case, they may create a new case, or data is only accurate for a specific, singular moment in time, on purpose. And so if you're querying that data, you can't consolidate it down to an individual or to a case, because both of those things could have changed literally by the hour or across different days.
And so the querying and the accuracy and the maturity of that data is only specific and relevant to a moment in time, literally, which makes it very difficult to collapse or to trend, because that data needs to be interpreted for specific moments in time. And those are more common issues that I'm tackling, right, across these policy landscapes. Not only are the federal requirements changing on a regular basis, the state's interpretations of those federal requirements might be changing on a regular basis, and that state legislature might be changing what they want to do with that on a regular basis, if not at least annually, you know, at some frequency that looks like that. And so that means that something simple like SOGI, the way that we think about sexual orientation and gender identity, has evolved as a social contract, even, right, on a nearly annual basis for the last 15 years, which makes that specific data field something that then has to be interpreted, over the provenance of that data over time, for each moment in time, to be consolidated to what we mean when we say that right now, today, knowing that what we mean when we say that a year from now could be different. And so, exactly, these are the realities that we exist in as we try to figure out how to consolidate and trend and make meaning out of that data. You made some really great points around looking historically. We do a lot of that to derive value and insight, but to do that, given what I just said, it means that you have to have, or historically, you had to have, quite a bit of specific knowledge and documentation and data dictionaries and schema definitions and table definitions, which (1) often didn't exist, and (2) even if they did exist, added a level of complexity to then consolidating that information into a single view today, given that the way that we think about that has evolved. Now, those are things that I've gotten very good at doing, but there's certainly no magic there, right? There is still just the hard work of understanding and digging in and talking to experts and understanding that legacy evolution and refactoring it into today's modern technology in a way that allows for that evolution to continue, again reducing risk, but also adding value to all of the work in doing that.
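
One way to picture "interpret the data for its moment in time" is with effective-dated code maps: the same stored value resolves against whichever dictionary was in force when it was recorded. This is only a sketch of the pattern; the codes, dates, and meanings below are invented for illustration.

```python
import bisect
import datetime

# The same stored code can mean different things depending on when it was
# recorded, so keep effective-dated code maps instead of a single dictionary.
# Each entry: (effective_date, {code: meaning}), sorted by date. All invented.
CODE_MAPS = [
    (datetime.date(2010, 1, 1), {"1": "male", "2": "female"}),
    (datetime.date(2016, 7, 1), {"1": "male", "2": "female", "3": "nonbinary"}),
    (datetime.date(2021, 1, 1), {"1": "male", "2": "female",
                                 "3": "nonbinary", "9": "declined to state"}),
]
EFFECTIVE_DATES = [d for d, _ in CODE_MAPS]

def interpret(code: str, recorded_on: datetime.date) -> str:
    """Resolve a stored code using the dictionary in effect when it was recorded."""
    i = bisect.bisect_right(EFFECTIVE_DATES, recorded_on) - 1
    if i < 0:
        raise ValueError(f"no code map in effect on {recorded_on}")
    mapping = CODE_MAPS[i][1]
    return mapping.get(code, f"unknown code {code!r} for {recorded_on}")

print(interpret("3", datetime.date(2017, 3, 15)))  # -> nonbinary
print(interpret("3", datetime.date(2012, 3, 15)))  # -> unknown at that time
```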

Dave Karlsgodt 33:22

Well, one thought that comes to mind, then, is that basically what you're saying is data in this context is really more like the difference between art and science. I mean, there's a lot of, like, interpretation that is in the eye of the beholder. Kind of mixing metaphors here, but it's like, is a banana taped to a wall art? Or does this data mean what it did yesterday? I mean, that sort of pulls the rug out from under our foundation to do anything, so that freaks people out, I imagine, in your work. But how do you deal with the emotional response of people when you start talking that way?

Natasha Nicolai 33:55

There are a few ways that we think about this. One is that that's usually knowable. And two, something I imagine that we will veer into here eventually is that basic, hardcore, mathematically relevant, statistically accurate machine learning has gotten very, very good, and so our ability to look back at those kinds of data structures and, at least at a cursory level, interpret them very, very quickly, and go through things like data discovery or data cataloging or assigning data dictionaries back to data that never had that, has gotten very, very good. And so while we may broach the subject at some point of, you know, today's hype cycle around Gen AI and robots and whatever else, the reality is, the vast majority of what I'm using today, when it comes to machine learning and AI, is your old-school, tried-and-true, 70-year-old machine learning, for discovery and for understanding those data and those data schema, and then taking that information, to your point around baselining, as the starting point, rather than having to go back and actually physically look through thousands of tables or data elements, which has been tremendously helpful. That's one of the evolutions that has completely changed the game in the last three or four years that we seem to have just kind of missed, like it was a blip on the radar. We skipped straight from 'can we update SQL into the cloud' to Gen AI, and there was a whole bunch of stuff that happened in between that was way more valuable, in my opinion, to the world that I deal with, which is trying to look back at, you know, sometimes we're still dealing with systems that were designed and built 25 and 30 years ago, and so, you know, our ability to interpret that and leverage machine learning to accelerate those processes has been super helpful. That said, the art piece of that does mean that we usually still need data contributors and data owners and people who understand the nuance and context of that data in the room having those conversations, which is yet another evolution that I'm really proud of helping drive in the industry, and that is, we don't just talk to CIOs anymore, right? This is not technology for technology's sake. These evolutions are really allowing for a kind of conversation and a collaboration across data owners and IT builders and policy makers, to be in the same room, having the same conversation at the same time, not only to extract that value and understanding, but to understand, then, what the next best investment really is in infrastructure. What are we trying to do? In 100% of my engagements with the public sector, we work backwards from meaningful policy outcomes. If I were engaging with your students and their work, we would be starting with the big long-term goal, which is to reduce carbon emissions, or to decrease costs and expenditure, or to create a better mix of resources that we are allocating across our options for electricity production, right? And we would work backwards from that, and very quickly you find you don't actually have to boil the ocean. Very quickly you find that you can demonstrate real value getting to a very unique and specific subset of the data. We might be working with systems that have literally thousands of data elements, and I care about 50, right?
I'm looking for the first subset that we can all wrap our heads around and drive value with, to get people excited, to demonstrate it doesn't have to be risky, to go through that trial-and-error process in a way where we're eating that elephant one bite at a time. And that's generally how we find success, you know, to answer your question, and reduce the risk. I'm not trying to obfuscate any of that away from the customer. I want them going on that journey with me, because I want them building muscle memory for how to keep doing that after I leave.
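
For a feel of what that old-school automated discovery looks like in miniature, here is a bare-bones column profiler: scan each column, infer a rough type, null rate, and uniqueness, and emit a draft data-dictionary entry. It's a simplified sketch of the kind of pass cataloging tools build on, not any particular product's method; the table and values are invented.

```python
# Bare-bones column profiling of the kind that underpins automated data
# discovery and cataloging. Real tooling layers trained models and pattern
# libraries on top of exactly this sort of scan.

def profile_column(name, values):
    non_null = [v for v in values if v not in (None, "", "NULL")]
    null_rate = 1 - len(non_null) / len(values) if values else 0.0

    def looks_numeric(v):
        try:
            float(v)
            return True
        except (TypeError, ValueError):
            return False

    numeric_share = (
        sum(looks_numeric(v) for v in non_null) / len(non_null) if non_null else 0.0
    )
    distinct = len(set(non_null))
    return {
        "column": name,
        "inferred_type": "numeric" if numeric_share > 0.95 else "text",
        "null_rate": round(null_rate, 3),
        "distinct_values": distinct,
        # High uniqueness suggests an identifier; low suggests a code/category
        "candidate_key": bool(non_null) and distinct == len(non_null),
    }

# Tiny column-oriented example table, invented for illustration
table = {
    "case_id": ["C001", "C002", "C003", "C004"],
    "status": ["open", "open", "closed", None],
    "amount": ["12.50", "0", "7", "3.25"],
}

for col, vals in table.items():
    print(profile_column(col, vals))
```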

Dave Karlsgodt 37:42

Right, right? Yeah. So it really isn't the technology helping the people; it's really the people learning how to use the technology. I'm not sure if there's a distinction without a difference there, but, like, it's really people focused and outcome focused. And I guess we'll come back to this one. I think the reason we got introduced in the first place is I asked you a question, and I said something like, how do we know that the robots aren't just going to take over the world and we're all doomed? You know, it was some version of that. So we'll come back to that. But before I do, I want to back up a second, maybe back to your experiences working with the state, or maybe some of your experiences currently. The first project I ever did, when I was doing software development, was for King County Housing Authority. So this is a large public housing authority, multi-billion-dollar budget, outside of Seattle. This was roughly 2002. So I helped them build their first website. They had no website at all, or rather, they had a website they didn't control because it was run by some third party, so they didn't even own it, you know what I mean? So some of the things I helped them do was, like, take ownership of their domain so they could have their own website. And part of the reason I do what I do now was I got to talk to every individual department head leading up to that project, because we were building a website and their part was represented, so I had to know what they did and, you know, find them a page.

Natasha Nicolai 38:57

See now you're speaking my love language.

Dave Karlsgodt 38:59

Yeah, no, I mean, it was great, and it's kind of not too dissimilar to what I ended up doing today. You know, you had said, go fast and break things, which is, like, the famous Mark Zuckerberg line, which is the ethos of Silicon Valley: you know, try stuff and fix it, and if you're not embarrassed by your first release, then you went too slow. And, you know, kind of that type of thing, which is the opposite of when you're in risk-averse public entities that are highly political. So it sounds like you've had some success in getting people to work that way without the bad part of that, where you have lawsuits and HR teams have lots of work to do if you do that the wrong way. So, like, how have you approached that in state entities, to really bring that creative thinking that needs to occur in there? Just talk to me about that.

Natasha Nicolai 39:40

Yeah, so change management ends up being a huge piece of that. But I will say I've had some really fantastic mentors who have moved the needle, from the Center on Budget and Policy Priorities, from Mathematica, even some work I've done with Code for America in the past, over time, where everyone was driving toward this ethos of continuous quality improvement. And how do you do that? And one of the things we developed at Mathematica was LI squared: Learn, Innovate, Improve, right? The idea was, you're road testing things, and you do have to iterate very quickly, and so it means that you are doing, as I described before, eating the elephant one bite at a time, right? You have to pick small things that you can put guardrails around and control in order to do that experimentation. But it also has to be something that's still very meaningful. And so, to one of the points that you just made, that usually means getting everyone on the same page. There's no world in which you get to do that kind of thing if everyone who's going to be involved or touched isn't on board. No one wants to be surprised, right? And so that's one of the things that you have to do: you get people involved, you think big, and then you start small. But that 'think big' can't just be some executive. That 'think big' can't just be some policy maker. That 'think big' really has to include, like I said before, this collaboration that today's technology and sophistication begs for, and that is, you have to have, you know, data and IT and policy and program all in that conversation together. But the other way that I think about that sort of Silicon Valley mental model is also: measure what matters. Solve for actual problems people are having, right? Prioritize event-driven, in my case, architectures. But let there be a rhyme and a reason. Let it be rational. I do a lot of logic modeling, you know, with my customers, where we're working backwards, like I said before, from those long-term goals. And I'm absolutely forcing the function of saying, if you can't draw a straight line from this investment we're about to make to that outcome, it's the wrong investment. And so this is not just about performance or scalability; all of those things are nice-to-haves, for sure, but you need to be making things that you're proud of and prioritizing, you know, solving these actual problems and delivering value. My team does not get involved in a project if there isn't an actual policy outcome or research question or insight that we know everybody at that table is really excited about finding out the answer to, because we know, from a change management perspective, that is what it's going to take, and that we can then successfully iterate to get there. Now, we have yet to fail up front with a deliverable, but it's because we're doing what you described before. We're putting in the work. We're usually working with someone for months, which, by the way, is still better than the years that it usually takes, but for months, to define those architectures and those data sets and get down to the data element and to build that out. And like I said, we know before we deliver it, right, that it is going to work. And we are transparent with the customer along the way around pitfalls and things that we discover, or that we may eventually scale it until it breaks, but they also know at that point that we've got skin in the game and we're holding their hand, and that we will help overcome whatever that thing is.
And one of the things that I love about the public sector space is that the peer impact is high, and so we do see states and other entities that we work with willingly sharing their findings and experiences and architectures and working models with each other. And I love that. I think that is an ethos that is sometimes unique to the public sector. You know, you have to remember that these individuals aren't getting up and going to work every day because they're getting big paychecks, right? They're doing it because they care. And so when you can deliver value around things that they care about, that get them out of bed every day, it keeps them pretty motivated. And even if you have a misstep here or there, I gotta tell you, people just work way harder when you're actually changing and moving the needle, even if it's hard, right?

Dave Karlsgodt 43:35

No, definitely. And I think we see that in my space of higher education; climate action planning is a pretty narrow, like, part of what I do, and there's lots and lots of collaboration. I guess two questions there. There's the sharing of information from, like, one entity to another. I guess the simple question would be, my expectation would be, if you've come across something that really works in one state, then the other states are gonna be like, hey, how'd you do that? Well, that's probably great for you and your team getting additional business. But I guess the other question I'd have would be, can you break down what that looks like to start from the outcome and work backwards? Like, give me an example. Like, take an outcome, and then, like, what would be some of the preceding steps that you would work through? What would the operation look like?

Natasha Nicolai 44:14

Sure, we'll go back to one I started with here, and this is also public information that you can still go today to the California CDSS CalWORKs website and find for yourself. And that is, for example, when I was working on these TANF policies, the state's assumption and the legislators' motivation and the director's goals were really to create environments where families could be self-sufficient. That was the long-term goal.

Dave Karlsgodt 44:41

That's the outcome they were trying to achieve.

Natasha Nicolai 44:43

Exactly. Now, when you work backwards, in all the ways that I was saying, from what self-sufficiency means to, say, a medium-term outcome, right? It means that you have to have resiliency and the ability to overcome hardship. It means that you have to have skill sets like executive function and self-regulation, to stay in hard situations or to go find the next job quickly if this one falls through; to, you know, have the ability to both make sure that you have gas in the car so you can go to your appointment and make sure that your kid gets dropped off at child care, and that it's a good child care provider. That's a lot to be sort of registering in your mind. And in order to do those things, you have to remember that one aspect of what we had learned from the Harvard Center on the Developing Child is that those are skill sets that are developed through observation and perspective gaining, oftentimes. And so you need to be creating an environment in which people can actually learn and observe, right? Which means you need to be in a situation where case managers are relational and not transactional. And that means that the kinds of activities that we need to be doing, I'm just walking you backwards through a logic model here, that means that the kinds of outputs that we care about are skill development and goal setting and practice and failure, but then reflecting on that failure and going back and updating your goal, which means that the activities that we care about are fundamentally different than 'did this person show up?' This is not a check-box, 0-or-1 exercise. This then means that I need people who are invested in and capable of having a relationship with, in that case, you know, often a young mother who's got a kid and a half sitting in the waiting room with her, so that she wants to come back, to keep interacting, to build those skills, to admit when they didn't meet their goal, because that's just as important to learn from as when they did meet their goal. Which meant that we were tracking all the wrong data elements. It meant that we were incentivizing caseworkers to do different things than what we actually thought we wanted in that long-term outcome of self-sufficiency, right? And so that is a good way to work backwards. And so that was actually how we motivated the entire state, and then eventually the feds picked it up, to change the nature of how we thought about measuring what matters, to go back to your Silicon Valley question, right? Yeah, we could then obviate that what we actually really cared about was investing in things like housing stability and full-time childcare and transportation vouchers, and not just work programs that built hard skills, but work programs that built soft skills and this ability to take more than 10 minutes on the phone in an engagement and actually sit down and go through a goal-setting activity, right? And then follow up with that person, you know, talking to those participants in that program as we did the research and follow-up. And UC Berkeley eventually did a study on our work to see if it was helping. Not only did we have better work participation rates, right, but we had more stable families, and we had people reacting to that government program in really different and positive ways: 'I felt like my caseworker cared about me.' 'They became my accountability buddy.'
I mean, things that were just completely changing the nature of how we delivered services, and ultimately saved money, right, over time, you know, because we were doing better with resource allocation. And that all took a lot of data. That all took drafting out that model, working backwards to data elements that actually were going to drive those decisions and behaviors and incentivize and reward workers and families and counties for doing things differently, that we could then roll up into that data narrative that painted the picture, that motivated the continued action and changes in the technology systems, and changes in the way that the state then thought about things like child care and housing and mental health, right?
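
Sketched as a data structure, the backwards mapping described here might look something like the following. The labels are paraphrased from the conversation and the element names are invented; the point is the traceability check, that every data element you collect should draw a straight line back to the long-term goal.

```python
# Toy encoding of a logic model: long-term goal -> medium-term outcomes ->
# activities -> data elements, plus a check that everything collected traces
# back to the goal. Labels paraphrased from the conversation, not official.

LOGIC_MODEL = {
    "goal": "family self-sufficiency",
    "outcomes": {
        "resiliency / ability to overcome hardship": {
            "activities": ["goal setting with caseworker", "reflect on missed goals"],
            "data_elements": ["goals_set", "goals_revisited_after_miss"],
        },
        "executive function and self-regulation skills": {
            "activities": ["soft-skill workshops", "relational case management"],
            "data_elements": ["soft_skill_assessments", "session_duration_minutes"],
        },
    },
}

def traced_elements(model: dict) -> set:
    """All data elements that draw a straight line back to the goal."""
    return {
        element
        for outcome in model["outcomes"].values()
        for element in outcome["data_elements"]
    }

# What we actually collect today; 'showed_up_checkbox' has no line to the goal
collected = {"goals_set", "soft_skill_assessments", "showed_up_checkbox"}

print("collect but can't justify:", collected - traced_elements(LOGIC_MODEL))
print("needed but not collected:", traced_elements(LOGIC_MODEL) - collected)
```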

Dave Karlsgodt 48:37

All right, well, let's try an exercise here, and I'm just making this up right now; this was not predetermined, to be clear. One of the things I'm working on right now is the broad problem of scope three emissions. And if you don't know what scope three emissions are, it's basically, like, all the emissions that come from things that are related to an entity. You know, you take a university campus: people driving to and from their campus, the stuff they buy, and the emissions generated by the stuff they buy. But it's not, like, the stuff that gets burned in their boilers or the electricity coming into their campus. It's sort of like the indirect stuff. So it's, by its very nature, something that a university doesn't actually control. But it's basically the whole climate change problem. It's like the societal impact, the net of all that we do. So in some ways it matters more than anything; in some ways, there's very little you can do about it. How would I approach that type of problem? Help tease that out for me. The outcome, ultimately, is zero emissions from society, which requires a complete transformation of all systems. So it's kind of at the scale of solving world hunger and poverty, right? I mean, it's a similar kind of thing, but I think you've broken it down. Like, where would I start?

Natasha Nicolai 49:39

Sure. So I think you've already hit some highlights along the way there. So oftentimes, when we think about a long-term goal, we're thinking about one or two major mission-critical concepts, and then we're breaking that down into some medium-term outcomes. So you already did the first step. You said, okay, our goal is this, you know, net zero emissions, or cradle-to-grave kind of thinking, or whatever you want to call it, around sort of someone's complete footprint, whether that's at school or at home or anywhere else, all rolling up. And then you think about, well, okay, medium term, what are the component parts that would tell me, from a metric perspective, that I'm achieving that? Well, I would need to see reduction in all of the other things that you mentioned. It's not only a decrease in consumption, perhaps, but a change in the nature of that consumption, or it's a shift in behavior around peak electricity, and it might be a change in the kind of gasoline, or buying an electric car, or what the temperature is that you keep your house at, right? All of those things, then, are going to be these sort of medium-term outcomes, or these short-term outcomes, that, when you start to aggregate them, eventually paint a picture that says, okay, yes, now I'm getting to this long-term outcome. And this is where we start to obviate: okay, so then, what are the kinds of data, what are the kinds of outputs, that you would need to know to confirm with some level of confidence that you are impacting those short-term outcomes? And so oftentimes those are going to be behavioral, those are going to be observational, and those are going to then beg you to get back to activities, right? So now we're back to kind of that case management piece that says, well, was I impacting someone's understanding of that scenario? Was I giving them tools and toys to actually improve their understanding? Were we able to make it clear to them the trade-offs at any one decision point that they were making, and then impact the decision that they made? Ultimately, in the end, could we create environments or nudges or opportunities for people to make better decisions, right? So now we're starting to unpack some of the assumptions. We might just say, oh, if we just educate people, then they will. But we all know for a fact that's not true. We also have to decrease the friction in doing that. We have to incentivize or motivate that kind of behavior. We have to recognize and reward that kind of behavior. This is just how humans work. You know, behavioral economics are so important to the way that we think and understand this. And so as you work backwards, right, you can see very quickly how you can get to activities and outputs that you need to be able to understand. In particular, you need to be able to understand whether you're impacting them, if what you think your goal as an institution is, is to drive toward these long-term behaviors. One of the ways I often say that is you want to be both outcome-based and process-informed. It's a very sad day when you see outcomes that you love, and then you look back at the process and you have no idea how you got there. It's equally sad when you look at those long-term outcomes and they're shit, and you're going, oh God, what did we do wrong? And you look back at your process and you have no idea, right? And so, you know, these are the kinds of things that we used to do in workforce programming.
Like I said, one of the things we cared a lot about, and that, it turned out, EdDs and employers cared a lot about, was soft skills. Well, how was I going to be able to look back at a set of activities, a bunch of workforce programming that we might put somebody into, and know the difference between them? Well, one of the ways I could do that, as an activity, as an output, was to track soft skill development, rather than just: did they get this certificate, check; did they pass their forklift practice exam, check. No, I cared more about, did they do that in a polite way? Did they do it wearing their uniform? Did they show up on time? It's the same kind of thing, right? I mean, this is where we get into other things that we love about technology, like A/B testing and the ability to track activities. And on a university campus, you could run all kinds of those tests, right? Oh, in this dorm, we put up this kind of messaging. In this dorm, we gave students control of their thermostats and tied them to an ecosystem. In this place, we gave people $50 rewards for reducing their consumption by 20%. And you could test all three of those and see which one actually led to an output, without having to assume or hope. You can explicitly look at those kinds of things, and you should be, if that's what you're trying to drive, right? You have to be able to unpack that assumption and, again, draw that straight line from here's my input and my investment, through to here's the thing that the person did, here's an activity, here's the output I tracked to know that that process did or didn't work, and then here's how that straight line went back through to my short and medium and long term goals and outcomes.
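[Editor's note: a sketch of the dorm experiment described above, with three treatment arms (messaging, thermostat control, a $50 reward) compared against a control dorm on per-student electricity use. The data, sample sizes, and effect sizes are simulated for illustration only.]

```python
# Simulated A/B/n test: which intervention actually moved the output?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical weekly kWh per student in each arm (200 students per dorm)
arms = {
    "control":    rng.normal(100, 15, 200),
    "messaging":  rng.normal(97, 15, 200),   # signage campaign
    "thermostat": rng.normal(92, 15, 200),   # student-controlled setpoints
    "reward_50":  rng.normal(88, 15, 200),   # $50 for a 20% reduction
}

control = arms["control"]
for name, usage in arms.items():
    if name == "control":
        continue
    # Welch's t-test against control: is the difference more than noise?
    t, p = stats.ttest_ind(usage, control, equal_var=False)
    saved = control.mean() - usage.mean()
    print(f"{name:10s} saved {saved:5.1f} kWh/student/week (p = {p:.4f})")
```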

Dave Karlsgodt 54:16

Excellent. I have all sorts of new ideas on how to take on scope three emissions, specifically, like transportation programs, for example, or, you know, things related to food systems and choices. Yeah, that was great. All right, well, I want to switch; let's go back to the robots, because I want to take that one on head on. We have all lived through the last decade or so where we've experienced, initially, our browsers being overtaken by just bad banner ads, and then eventually our social media being taken over by AI bots or Russian rogue agents or whatever. So there definitely is no lack of things that can cause fear that we're doomed and it's all just going to go into some apocalyptic scenario. Again, the reason you're here is because you were one of the first people in a long time to give me a vision that was like, the nexus of this can really help public agencies; this can really help society broadly. So I'd like to hear your reaction: convince me we're not doomed and that the robots aren't just going to take over everything.

Natasha Nicolai 55:13

Sure. Well, one, you know, never say never; I can't promise anything. Two, I don't think that's happening anytime soon, right? So first what I would say is that you've got a lot of media consumption and hype cycles today that are still reflecting what is ultimately a tiny fraction of what's actually occurring on the ground in government and, believe it or not, in the private sector as well. I mean, yes, some of these things really are happening in tiny slices. But remember that these companies today are incentivized, for the purposes of investment and valuation and staying on top of the pile, perhaps to overemphasize what's there right now. And honestly, the vast majority of these things have improved upon, like I was saying before, the corpus of data that we train machine learning on, improving our capability to do data discovery where otherwise it was just a black box. That is tremendous and incredibly helpful. And the value add in reducing the human friction, the need to have those experts in the room for hours and hours on end to understand what's in these systems today, and getting that unlocked, is tremendously valuable.

When I took over as the chief data officer, I ended up with a pile of 450 data assets on my lap that I was then supposed to magically govern, and seven of them had a data dictionary, and three of them had assigned data owners. Literally nobody knew what was in 447 of these data assets, and it was my job to figure it out. And at the time, five, six years ago, that was quite the daunting task. That would have been a multi-million dollar effort that likely took over a year and a half, that I would have had to pay some third party to come in and do for me. Today, we can do that in a matter of hours, to the tune of tens or hundreds of dollars, with incredible accuracy. That is game changing, right? Sure, that also means that people have to understand how to use that and what it means and what we're doing with it, and of course, we're still doing that with people.

I do think there's this really interesting dynamic, especially in government, and I would say even in the private commercial sector. How many times have you just started yelling at the chat bot on the other side of your phone call to the airline or the hotel or your credit card or whatever? You know that, really, it's good, but it's not that good. Humans still want human interactions; people still want relationships, and not just transactions. When transactions are not going well through automated processes, people let you know, right? And that's not a very satisfying experience. And while a lot of those things have improved, a lot more of what's really happening in that space is augmentation. I've helped a couple of companies recently develop some really robust, really meaningful agentic processes that do use pretty high-end AI and generative AI. Those are things where humans are now working within those agentic processes to, again, decrease friction around specialized areas that still require human touch, that still require sentience, that still require creative thinking. Remember that none of these AI models today are sentient. None of them are doing actually creative things; they are all producing some regurgitation of something another human did before. We don't see that changing anytime in the near future.
I mean, maybe, you know, eight or ten years from now. And so I do think we're still caught in a little bit of a hype cycle and an over-indexing on an assumption that that's going to happen tomorrow, when it's definitely not. What we are seeing today, when these use cases are effective and actually scaling, is that we are augmenting someone's ability to do more meaningful work, right, not replacing them in that work. We are trying to get rid of things that are undifferentiated heavy lifting. We are trying to reduce the friction in ways like I was saying before, around data discovery or data governance. Even improvements around things like security and security perimeter maintenance: those are really helpful places, where having an AI process that can regularly look for anomalies is incredibly important and valuable. And that was something a human wasn't doing before anyway, or if they were doing it, they weren't doing it very well, right? And so now you can pair that AI, that technology, with a human, and everyone is genuinely much better off. And so not only do I not think robots are taking over tomorrow, but there are a lot of places still where we haven't even taken advantage of what is happening today. You know, I'm working with a really major city right now on a combined utility pricing and public utility infrastructure investment modeling exercise, where we are pulling in Internet of Things technology that, say, Amazon warehouses have been using for a decade or more, and putting it into dams and locks, things we've been doing in maintenance and operations around distribution facilities forever, but that had never been adopted or integrated into public utilities or dams or locks.
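[Editor's note: a sketch of the data-discovery task described above: automatically profiling an undocumented data asset into a draft data dictionary. Natasha describes pairing this kind of work with machine learning; this shows only the deterministic profiling half, and the file name is hypothetical.]

```python
# Draft a data dictionary for one undocumented data asset.
import pandas as pd

def draft_data_dictionary(df: pd.DataFrame) -> pd.DataFrame:
    """One draft-dictionary row per column: type, null rate, cardinality, example."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "inferred_type": str(s.dtype),
            "pct_null": round(100 * s.isna().mean(), 1),
            "distinct_values": s.nunique(),
            "example": s.dropna().iloc[0] if s.notna().any() else None,
        })
    return pd.DataFrame(rows)

asset = pd.read_csv("undocumented_asset_042.csv")   # hypothetical: one of the 450
print(draft_data_dictionary(asset))
```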

Dave Karlsgodt 1:00:09

Or if they are, it was probably a guy with a little clipboard and a checkbox...

Natasha Nicolai 1:00:12

Yeah, exactly! And so now we have this ability to take those component parts and put them into public sector investments. Those people still have their jobs. It's just that now, instead of running around and constantly putting out fires and going to the next transformer that blew up or the next key in a lock that wasn't working anymore, they can triage. They can prioritize. You can do preventive maintenance. You can start to imagine utility pricing models that include the fact that every eight months we're going to need to turn X, Y and Z off for three weeks and do maintenance, so that instead of being caught and surprised and having to jack up the price of electricity because suddenly this dam isn't working and we didn't know that was coming, right, we can plan for it. We can consumption smooth. We can improve our investments and resources and allocations. And just remember, right, that I had yet to work with a major government entity that was already doing that. We are doing that right now in a major city, and I'm incredibly excited to demonstrate the value in that, so that other cities can adopt those same kinds of mechanisms and behaviors. But it's a long time before, you know, every city or every state or every municipality has made it through that, and I think that's okay. I still think it's very exciting to see those transitions happening, and I think it's important that we get a little bit more honest, maybe, about what it is and what it isn't that's happening today, so that we can get past the fear and the hype and just get to the value and the reality and the training, making sure that young people today understand how those technologies work, so that they can adjust their future expectations and job prospects and the value in what they're going to do. It's going to keep happening, that's for sure, but I don't think it's replacing people anytime soon.
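[Editor's note: a sketch of the IoT-driven preventive maintenance described above: flag sensor anomalies with a rolling z-score so crews can triage and schedule work instead of chasing failures. The sensor feed, file name, and thresholds are all hypothetical.]

```python
# Flag anomalous readings in a hypothetical transformer temperature feed.
import pandas as pd

def flag_anomalies(readings: pd.Series, window: int = 96, z: float = 3.0) -> pd.Series:
    """True where a reading deviates more than z rolling std-devs from the rolling mean."""
    mean = readings.rolling(window, min_periods=window // 2).mean()
    std = readings.rolling(window, min_periods=window // 2).std()
    return (readings - mean).abs() > z * std

# Hypothetical 15-minute readings; 96 samples is roughly one day of context
feed = pd.read_csv("transformer_17_temps.csv", parse_dates=["ts"], index_col="ts")
alerts = flag_anomalies(feed["temp_c"])
print(feed[alerts])   # candidates for preventive maintenance, not emergencies
```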

Dave Karlsgodt 1:00:14

No, I appreciate that response. I mean, I think back to my own beginnings, when I was first coming out of college and, you know, moved from being a musician to a software developer, and kind of achieved that magician status, where people perceived me as somebody doing magic. But I did a lot of work to learn all those things, like you said, and what I was able to do that they couldn't came just from having awareness of what the tools were. And I think that's really what I hear you saying: the tools we have today are just so much better, and we're barely able to learn how to even use them. And it's not that the tools use themselves; we're a ways off from sentience, which is what that would imply, right? That's great.

Natasha Nicolai 1:02:35

We need to stop being hammers, running around, looking for nails, and appreciate that the toolbox has grown substantially, and that's exciting.

Dave Karlsgodt 1:02:43

No, very good. I think we can leave the robots behind then. So I guess where I want to end this conversation is trying to come back, partly to appease my marketing manager, to make sure that this is relevant to energy and sustainability. I think we've touched on a bunch of stuff that already is, so I think I'm safe as it is, but the physical implications of AI are another, well, I would say, real thing. I'm involved in projects right now where we're talking about expanding the capacity of the grid to accommodate data center construction. The question is whether it's something that's going to be at a scale that dwarfs all other use of electricity, or just one more way that we use electricity in general: because we're going to use so much AI and all this stuff, not only are the robots going to take over the world, they're also going to suck up all our electricity. I'm trying to find a way to inspire people who are interested in sustainability and driving sustainability outcomes to really lean into what you're saying, without this being the reason that they don't embrace it.

Natasha Nicolai 1:03:39

I'll make a couple of statements, and then you can tell me if they're helpful or effective in terms of the audience and your goal here. The first is that utility modeling around data center consumption is a massive improvement over the legacy investment that literally millions of entities are making today in maintaining and housing their own data centers. There are a lot of economies of scale, and improvements in efficiency and consumption smoothing, around data centers and racks if you can move everyone into, say, a cloud environment, because the natural ability then to flex that use across multiple users sharing, say, one rack is massive. You can imagine right away that if we could actually shut down every data center running in a million different state buildings all across the country right now and have that running in four really large data centers that are consumption smoothing that utilization morning, noon and night, that in and of itself would make the investment in cloud and these consolidations of data centers worthwhile. The reality is, a lot of that infrastructure and technology today, in those legacy environments, in those physical spaces and basements and buildings, is running 24/7 already as it is, and it doesn't need to be. It will take a moment, I think, for this transition to fully occur and be realized. But there is an efficiency, for sure, in the consolidation, and in the flexibility, utilization, and consumption smoothing of those models over time. That's one.

Two, of course, all of the things that we've talked about today hopefully make obvious the fact that we are pretty inefficient today at how we allocate resources in a lot of our public sector investments, and we should care about that. To go back to this example of the major city where I'm working on both their utility pricing for each next kilowatt hour and their utility infrastructure maintenance and operations: right now, the way that those decisions are being made sometimes feels like the literal wag of a finger, you know? I'm talking to some of the people who are making these decisions right now, and they're telling me they're looking at an actual physical wall that's streaming seven or eight third party data sets, and they're literally like, well, it looks like it's windy here, and it's raining there, and the water is flowing, you know, so we're gonna call it 40. That's pretty scary to some extent, and that's happening all over the place.
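[Editor's note: a sketch of the consumption-smoothing argument above. Many small on-prem data centers each provision for their own peak; a shared provider pools the workloads and provisions for the pooled peak, which is far smaller than the sum of individual peaks because those peaks don't all land at the same time. The load numbers are simulated for illustration.]

```python
# Simulated comparison: sum of individual peaks vs. pooled peak capacity.
import numpy as np

rng = np.random.default_rng(1)
n_agencies, hours = 1000, 24 * 7

# Hypothetical hourly load (kW) per agency; each peaks at different times
base = rng.uniform(5, 20, size=(n_agencies, 1))
load = base * (0.3 + 0.7 * rng.random((n_agencies, hours)))

on_prem_capacity = load.max(axis=1).sum()   # everyone provisions for own peak
pooled_capacity = load.sum(axis=0).max()    # one provider provisions for pooled peak

print(f"sum of individual peaks:   {on_prem_capacity:,.0f} kW")
print(f"pooled peak:               {pooled_capacity:,.0f} kW")
print(f"capacity saved by pooling: {1 - pooled_capacity / on_prem_capacity:.0%}")
```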

Dave Karlsgodt 1:04:17

It's kind of amazing that society has functioned. The more you dig into that, you realize how little... yeah.

Natasha Nicolai 1:06:13

I say on a pretty regular basis that there are days when I wish I didn't know how the sausage was made, because it does lead me to state things like you just did: it's amazing that we're all still sitting here, you know, and that anyone's getting safety net benefits, or that my lights are still turned on. So there's a tremendous amount of efficiency to be gained in taking advantage of those data centers, and that should hopefully negate some of the cost of doing it. And the last thing I will say is that the place I work is no different from the two or three other major cloud providers globally, and that is that they are thinking very hard about this. Sustainability is paramount for them and for their largest consumers as well. And so they are thinking about the physical location of these investments to minimize impact and maximize the ability to use green energy, and otherwise thinking critically about not only the geospatial positioning of these things, but their consumption and utilization, to maximize efficiency. That's some really smart people using some really high-end technology to make the next best decision about the next investment in technology, which is good. I think that should give us some peace of mind, at least about those things. But there's a lot left to be done at this point in terms of better allocation of public sector resources, and that's what gives me hope, I guess, in this particular conversation. I've listened to some of your prior podcast conversations, and I think there are a lot of really smart people who, if they could take advantage of these evolutions in the technology and the data I'm describing today, could be even more impactful than they are now. And that, in and of itself, to me, warrants those investments and makes them worthwhile, and should make us all a little bit more excited than scared.

Dave Karlsgodt 1:07:53

I appreciate that. Let me summarize, just to see if I got that right: there are real resource needs for doing AI and compute and all of the technology that we've talked about, but the amount of benefit it can bring, if used for things other than invading people's Twitter feeds with Russian bots or making Bitcoin or whatever, means that, on net, you think it can unlock such efficiencies that you're still net positive on the whole enterprise. Is that fair?

Natasha Nicolai 1:08:21

That's a fair statement. And I hope I'm proven correct. I hope I continue to have a positive impact, you know, one that makes that statement true and worthwhile, and I believe it's possible, yeah.

Dave Karlsgodt 1:08:31

That's great, yeah. And, I mean, same here. I guess none of us knows the future, and the robots will tell us otherwise if we get that far. But for now, I guess we can keep the positive work going forward. We talked about all sorts of great stuff today. This was really, really fun, Natasha, and I probably could talk to you for another 10 hours if we had the time. But let's maybe end it here. Anything else you want to share with the listeners, or things they should be thinking about, resources you'd point them to that you haven't already mentioned?

Natasha Nicolai 1:08:57

Sure. I do get asked a lot at conferences, when I'm presenting on stage to large audiences: hey, what should I go home and be telling my kids, who are about to go into college, that they should be thinking about, or what should they be studying, or what certification should they be getting to do what you do? And my answer is always roughly the same, and that is: get very comfortable with change. Appreciate the through line of some of the conversation that we had today (and thanks again, Dave, for having me; I equally enjoyed the conversation), and that is that we're in a world now where change is constant. I know we hear that kind of thing all the time, but the sooner you can get to the point where you're creating mental models and frameworks and drawing out your own conceptual theories, the sooner you have an umbrella under which these things can situate, and you can be flexible in your mind's eye about how you allow your thinking to evolve with the technology's evolution, with the policy's evolution. I mean, think about the fMRI of 20 years ago that I was describing. You know what you know up until this moment in time, and you need to be okay with the fact that you might not know or understand what's going to happen tomorrow, and that that's okay, and that's a good thing. Rather than being disappointed by that or afraid of that, embrace it, be excited about it. Learn, be curious, dive deep. Assume you don't always know the answer, and you will learn a whole lot more, right? And so, you know, it's funny, we just came off the heels of our major technology conference where we announced, I don't know, 20, 30, 50 new services; I honestly don't even know how many were announced this year. And a lot of people ask me if that makes me nervous, if it bums me out in terms of all this work I've done on all these architectures that work at scale today, you know, what if it disrupts them? I say, great! I have to be excited, because that is what I'm trying to teach other people and other public sector entities to do: be excited about the change, embrace the change, be ready for the change. Assume it's gonna change, right, and you'll be better off. And so I guess that's the one piece of advice I would leave people with: don't spend too much of your time or energy becoming an expert in a specific tool. Spend your time understanding systems and frameworks and mental models, where these other things can be flexible components that change within it, and you'll be a lot better off. You'll be a lot more successful.

Dave Karlsgodt 1:11:14

I love it. I think that's sound advice for anybody in any field, but particularly for those working in sustainability. That's beautiful. So Natasha, thanks so much for being here.

Natasha Nicolai 1:11:22

I appreciate you. Thanks, Dave.

Dave Karlsgodt 1:11:25

That's it for this episode. Thanks to Claudia Ahawo and Trevor Jolly for their production assistance. Our theme music "Under the Radar" comes courtesy of Gio Washington-Wright and the studio Big Band. You can find us online at Campusenergypodcast.com. We're also now on Bluesky at @energypodcast.bsky.social. Follow us on LinkedIn by searching for Campus Energy and Sustainability Podcast. If you enjoyed this show, please tell a friend or drop a rating or review on your favorite podcast platform. And as always, thanks for listening!


Transcribed by https://otter.ai