TechTank, a biweekly podcast from the Center for Technology Innovation at Brookings, explores today’s most consequential technology issues. Moderators Nicol Turner Lee and Darrell West speak with experts and policymakers to share data, ideas, and policy solutions that address the challenges of our digital world.
Just as some workers and companies have turned to generative artificial intelligence (AI), so have colleges and universities. Some surveys show 80% to 90% of college students report using AI for academics, and several colleges have embraced the technology by integrating its use into their curricula or building customized large language models (LLMs) for the school community.
Creating new policies, training faculty on AI and other technological innovations, and ensuring access for students are an extension of universities’ previous work in digital inclusion and literacy. How universities are leading in all three areas is the subject of this new podcast episode, which also shares additional resources for making AI more accessible and available to faculty and students.
In this episode of the TechTank Podcast, co-host Nicol Turner Lee is joined by Lev Gonick, chief information officer at Arizona State University, to discuss how universities are integrating AI into their institutions and considering lucrative partnerships to ensure that students and faculty benefit from AI technologies.
Listen to the episode and subscribe to the TechTank Podcast on Apple, Spotify, or Acast.
Transcript
[00:00:00] CO-HOST NICOL TURNER LEE: You are listening to TechTank, a biweekly podcast from the Brookings Institution exploring the most consequential technology issues of our time. From racial bias in algorithms to the future of work, TechTank takes big ideas and makes them accessible.
[00:00:26] Welcome to the TechTank Podcast. I am co-host Nicol Turner Lee, the director of the Center for Technology Innovation at the Brookings Institution, founder of the AI Equity Lab, and editor-in-chief of the TechTank blog. It’s been three years since the release of ChatGPT, and everybody’s trying to figure out what to do, probably including you who’s listening, and institutions of higher education are still navigating both the opportunities and challenges of AI for their students, administrators, and faculty. What’s interesting, I think, is that over 85% of bachelor’s, master’s, and doctoral students use AI for educational purposes.
[00:01:05] Things like research or homework assistance or statistical analysis or outlining, even though many of them report low AI literacy levels. So that’s interesting, right? They say they’re using it, but maybe not to its maximum ability. And perspectives in academia towards AI vary widely. Professors have taken different approaches to AI in the classroom, with some integrating it into assignments and others banning it outright.
[00:01:32] And skeptics have raised concerns about academic integrity and the erosion of students’ critical thinking skills. Which is why we’re having this conversation today, because I think there needs to be a broader conversation about, one, how higher education is integrating this into its ecosystems, and, more importantly, why we actually need to ensure AI fluency for workforce readiness as well as personalized learning. So there are a lot of colleges and universities developing AI tools and guidelines, but I know one institution, and one individual in particular, that’s doing a heck of a job at it.
[00:02:13] And that’s my friend Lev Gonick. He is the Chief Information Officer at Arizona State University, and he’s here to discuss his recent work designing and managing digital enterprise infrastructure at ASU. Lev also served, and this is how I knew him, dating way back to when we were both in the digital divide space, as a co-founder and CEO of DigitalC, a nonprofit focused on digital opportunity and innovation. In a prior role, he was the Chief Information Officer at Case Western Reserve University. He’s the ultimate great guy, really clear about how to do this. I’m gonna make sure he tells you a lot but doesn’t give away the store, because he is one to put on your list of people to watch. Welcome, Lev. Thanks for joining me, my friend.
[00:03:01] GUEST LEV GONICK: Great to be with you, Nicol. Great to be with you.
[00:03:04] CO-HOST NICOL TURNER LEE: I always appreciate seeing you in person when I come out to ASU, and I always appreciate hearing your voice. Thanks for taking the time. I laid out a lot, and I probably gave you more compliments than you actually expected to receive, but that’s me and what I love about this conversation we’re gonna have today. You know this landscape better than anybody else, from both the enterprise side as well as the community side, because, as I mentioned, you and I have done a lot of work to surface digital equity as one of the prime reasons why we both stay so involved in technology. I laid out what the landscape looks like to most of the world that’s thinking about AI
[00:03:44] integration into higher education. But what are you seeing when you think first and foremost about AI within these institutional contexts? So give me the broad scope, and then I’m gonna have you brag about some of the stuff you’re doing at ASU.
[00:03:57] GUEST LEV GONICK: Maybe the place for me to start is where you and I connect. I think we have both been interested in and committed to digital inclusion and trying to accelerate digital inclusion. And both you and I started even before there was broadband access; we were all involved in just trying to make sure communities had access to computing. I was up in Canada teaching at the time, in Waterloo, and these newfangled computers showed up in the early nineties, these PCs that no one had really seen before. And my first instinct was to build a community resource so that community members in Waterloo who had never used a personal computer would have access to one. Then if we fast forward into the late nineties, as the internet, something that mostly started in the university world, began to create an itch and a curiosity in people as to how they might be able to use it, there again I found myself, in Southern California at that time, working on trying to make sure the community had access to early internet-enabled computing environments. We actually helped a local school system in Pomona, in Southern California, acquire and renovate an abandoned shopping center and turn it into an education village, which had the first internet-enabled computing environment for folks. And of course, that was part of a national program all over the country: you in Chicago and folks in Cleveland, my adopted hometown, as you mentioned; lots of folks were working in that space as well. And if you fast forward to today, I think we’re in the exact same situation. We’re [00:06:00] talking about AI today, and in many ways my commitments here at ASU are informed by the same set of principles, which is that we need to make sure that all of the tools that allow for participation in society, in the economy, in education, in training and the workforce are made available to the largest number of participants we can possibly reach. And I think the greatest digital inclusion or digital equity challenge of the last 50 years is actually the one that we’re leaning into right now, which will be around AI. I’d love to chat with you about just why that is. And here again, I think it’s so important that we make sure that ASU serves the broadest number of students from the most diverse backgrounds, all with an interest in and a commitment to achieving success. That’s our mission here at ASU.
[00:07:10] But it’s also not just about the students who come to us on our campuses or online; it’s also about our commitment to the community around us. And so, for me, that’s the thread that I’ve managed to pull for almost 30 years now.
[00:07:24] CO-HOST NICOL TURNER LEE: I love the way you put it back into our history, right?
[00:07:27] And you’re right up my alley, because I’ve been trying to tell people we’ve gotta talk about the history of the internet, the history of why many of us have been in this space for so long. Before I go into artificial intelligence in particular, what have you seen in terms of connectivity for students at the university?
[00:07:47] ’Cause I think this is a primer for the conversation we’re gonna have and for what you’ve been able to do at ASU to make AI much more ubiquitous than we’ve seen with traditional connectivity.
[00:07:58] GUEST LEV GONICK: Yeah, I think, Nicol, for me, what I’ve seen in a very affirmative way is that as we traversed and tried to deal with the global pandemic back in 2020, we realized here at ASU that having access to technology was a table-stakes conversation.
[00:08:20] It was something in the air before that, but obviously this became, as I say, a kind of table-stakes conversation. And the university really did, I think, an extraordinary job in making sure that all students had access. We have the largest number of students from tribal nations across the land, and many of them, when they went home during COVID, needed to have computing devices and internet access, and the university leaned in to make that happen. Two-thirds of the land here in Arizona is rural, and we have huge numbers of students whose homes are in rural Arizona.
[00:09:04] And lo and behold, there was very little internet connectivity available, for a lot of interesting historical reasons here in Arizona. Again, the university stepped in to provide computing and internet service and internet devices where we could, and partnered with others to make sure that happened.
[00:09:24] And we even supported our international students who needed to go home during COVID, or chose to go home during COVID, to provide them with access as well. So the principle is that access to education is inextricably linked to access to technology, the technology that makes accessible the education resources we make available.
[00:09:50] And in that same context, ASU accelerated the work that we’re doing in AI, because we borrowed from our experience with faculty adoption and use of technology during COVID. I think there’s a general sentiment that faculty are slow to adopt; not at ASU. It’s unbelievable what’s happening in the AI space. There was literally a call early on for professional development for our faculty, and more than 3,000 faculty here at ASU have actually availed themselves of asynchronous curriculum and professional development content, with no incentives.
[00:10:35] Just as there were no incentives during COVID, when we helped to flip the nation’s largest university to an online experience. Again, not all of it was necessarily the best out of the starting gate, but here again, I think our faculty leaned in and have seen this as an opportunity to address the mission of making AI available to all, realizing their responsibility to become literate in that regard.
[00:11:02] But there’s also, I think, an incredible amount of innovator’s DNA here at ASU, entrepreneurial DNA, and so lots and lots of faculty use these tools. And we opened conversations very early on, literally three years ago, as your intro indicated, conversations that we had with OpenAI to try to help shape the ways in which, and perhaps answer the question, why should OpenAI,
[00:11:30] which was essentially an R&D facility at the time, lean into the education space? Not only as a principle that was important to do, but also as a way to actually shape market forces to engage, and to use the platform of ASU, as large and diverse a learning community as we are, as an opportunity to work together. And in the [00:12:00] January immediately following the November release of ChatGPT, we began to actually make their tools available. We were, and are, among those universities that make that product, which has now got an education wrapper to it, which we helped to shape as a requirement for the security and the privacy of our students, available through licensing to all of our students here at ASU, all 200,000 of us.
[00:12:32] CO-HOST NICOL TURNER LEE: Wow, and that’s what I love about your story, because I think you came into this understanding the need to ensure ubiquitous access to technology, and now, as we’re flipping into the artificial intelligence age, you’re making sure the same happens. Now, I wanna go into this faculty professional development; I just wrote a paper about this, Lev. But before I do that, just give us a taste of where AI is showing up on the university campus for students, and then faculty and administrators. It would be nice to get some landscaping of what you’ve been able to bring to this area.
[00:13:09] GUEST LEV GONICK: The formula, if you want to call it that, at ASU is to try to do three things at once. One is essentially developing these broad communities of practice, and some of the professional development work came early on. These were conversations with faculty, first about essentially putting aspirations on the table, as well as, obviously, considerations. There was, and has been from the beginning, a very significant commitment from our faculty to making sure that there’s an ethical framework for the ethical use of these tools at ASU. And when I heard that set of concerns broadly from across the campus, from our humanities, social science, and professional schools, law, journalism, lots of faculty who were part of these early conversations, I said, that’s terrific; I’d love to stand up a faculty ethics committee to help guide the work that ASU was going to embark upon. And that’s been a central part of our commitment to dialogue with the faculty. The series of questions that constantly followed were along the lines of: we also need professional development and some literacy training about what was, at the time, a whole new vocabulary of things that were not easy to immediately pivot towards, and how to have conversations about early challenges in the maturity curve of the technology, with lots and lots of conversations immediately flipping to shortcutting and hallucinations and other kinds of things. Again, there were lots of opportunities for conversations. And then the professional development has led to literally
[00:14:55] dozens and dozens of communities of practice, where faculty within the disciplines are having not only conversations; we have also developed a series of ASU platform technologies and tools, which we call ASU CreateAI, as a creative effort, as a generative effort, to actually create platforms that support secure, private walled gardens, where we can protect either the intellectual property of the faculty members or, certainly, the personal and health-related identities of our students and staff along the way. And that sort of set us on our way.
[00:15:39] So the first leg was really around communities of practice. The second piece was really around trying to figure out ways that we could catalyze experimentation and innovation, and we did this actually with OpenAI. We created an internal grant program where we asked faculty to start with, and then faculty and staff, and then finally faculty, staff, and students, to respond to an internal grant program with impact proposals: how, by leveraging these tools, do you think you could have an impact on the teaching and learning, the research, or the service of the institution to the communities around us? And we thought, again, that we would initially catalyze a couple dozen projects.
[00:16:22] And at this point in time, as we’re talking, Nicol, we currently have 600 projects in flight simultaneously, responding to these grant activities, which has given people a chance to propose projects that range from individual solo projects to whole classes working on a wide range of issues: research-related issues, transformation of language-learning experiences, persona-based education for healthcare professionals, conversations on ways of engaging students in philosophy and the law, transformation of the entire STEM undergraduate education experience. All of these have essentially been generated from the idea of how to catalyze interest and then continued reflection. And then the third leg of the whole project has been to set up a futures environment, a series of sandboxes, where we can support and respond to the needs of the campus community as this very early, immature technology, which has sucked all the air outta the room in almost every conversation, continues to evolve. We wanted to create sandboxes, and that at ASU has evolved into a significant effort around this CreateAI platform, which now has literally tens of thousands of faculty and students using the platform to actually [00:18:00] experiment and try new things. There are 4,000 AI experiences that are in a library here at ASU, not developed by central IT, not developed by my group; in fact, not usually developed by anybody in IT. These are low-code, no-code solutions that subject matter experts,
[00:18:21] and of course students, are leaning into and basically sharing with one another and applying, with access now to over 50 of the large language models and vectorized databases of content: all the curriculum at the university, lots and lots of research activities. That gives you a sense, Nicol, of the breadth of the work underway here at ASU.
[00:18:48] CO-HOST NICOL TURNER LEE: This is so interesting to me, because I think you’ve flipped the script on what most people think about when we start introducing AI into education. A lot of that shifts into this conversation, Lev, on guidance and documents and, you know, what the policy is, the protocols. And it sounds like what you’ve tried to do…
[00:19:06] GUEST LEV GONICK: Not here.
[00:19:06] CO-HOST NICOL TURNER LEE: Yeah. I was gonna say, tell me.
[00:19:09] GUEST LEV GONICK: Yeah. We had very little conversation about policy, because we felt that we had all the policies we needed.
[00:19:15] We have an academic integrity policy. There’s a whole bunch of practices that we knew, and know, are gonna continue to evolve, and we use the north star of the existing policy framework, which is the balancing act among the longstanding set of practices as to how knowledge gets created, how we validate and verify knowledge through peer review, how we maintain a commitment to academic integrity, and how we correct and self-correct when there are challenges in that area. And all of this allows us to apply it to AI without us spending two years in conversation about the strategic plan and about the policy collection of activities. So we have a significant bias to action here at ASU.
[00:20:07] CO-HOST NICOL TURNER LEE: I see.
[00:20:07] GUEST LEV GONICK: But it is guided. It’s guided by things like the faculty ethics committee, and we have a faculty research and AI committee. These are largely around what kinds of guardrails and what kinds of considerations need to be put on the table by our part of the university community, that is to say, the central services and support teams across the breadth of the university: our instructional designers, our technologists, the people who support teaching and learning for neurodiverse learners here on the campus. Everybody needs to have a voice at the table in helping to shape that work. But we didn’t need to invent a whole bunch of new processes that would leave us spinning and facing the challenge of just getting outta the starting gates, which is still significantly a challenge, not only in higher education but certainly where we are, you know,
[00:21:13] in many ways challenged to unlock value. And I think we’re very lucky to be at ASU, where there is something in the water here in which we’ve managed to have this, I think, significant bias to action, to experimentation, to iteration, to design-build. And I think that is not only in this AI moment; ASU has had that in the DNA of the institution for more than 20 years, and it is what helps to differentiate ASU from all the other great research universities in the country.
[00:21:50] CO-HOST NICOL TURNER LEE: And that’s so interesting, right? Because again, I’m gonna go back to this guidance piece. I hear a lot of CTOs like yourself err on the side of guidance because of the security concerns, right? There’s this whole thing: we can’t bring these technologies to our universities because they’ll have security breaches, particularly at the enterprise level.
[00:22:08] I mean, as a CTO, you are trying to reimagine what this looks like. And what I love about your conversation so far is that it sounds like you take agency over where these tools show up. So it shifts the conversation from “no, we can’t have AI because it’s leading to critical skill erosion” or “it’s keeping young people from learning.” It sounds quite the opposite at ASU, right?
[00:22:30] It’s engaging various communities, various disciplines, various hierarchies that come to the table to learn together. But as a CTO, where do you start injecting some of the concerns about cybersecurity breaches and privacy?
[00:22:47] GUEST LEV GONICK: Yeah, we take those extraordinarily seriously, and we have a very large, robust cybersecurity infrastructure here at the university. And we have, again, a series of principles related to our AI architecture here, which your listeners can view for themselves at ai.asu.edu. In that environment, we have a series of commitments and principles that then inform our architecture, which then informs our code and our DevOps environment.
[00:23:22] And it starts with two key principles: one is privacy by design, and the other is security by design. The entire ASU CreateAI platform is developed to support those and to deter the injection of poisonous tools, which have already begun to create challenges in a number of other platform technologies that are out there. And [00:24:00] certainly we have been regularly testing; in fact, we have AI testing in real time against those kinds of insertions, the kind of thing we see out there in the broader environment, where a certain sequence of documentation and/or code can be inserted and can poison some of the language models and the ways in which our databases get utilized in the real experience that students and faculty have, which can create bias and can create ethical considerations. Also, coming out of our commitment to privacy and to security, we actually have an ethics and bias engine that allows for testing, both in real time if it’s enabled by humans, and/or for folks using and creating solutions. You can literally run your environment, or you can auto-run your environment, through the ASU bias engine.
[00:25:12] Again, that is an effort not only to take a look at the fidelity of the language models but also to make sure that you have a security agent working for you, confirming that there have been no successful efforts to insert malware or otherwise nefarious code into the fairly complex data systems that are informing research in our labs and the like. So again, I think the piece to understand here is that these are almost never binary issues. This is not a matter of “it’s secure or it’s not secure.” This is about whether or not you have a commitment to security, in which case you want to take ownership and agency over your own organization and bring your cyber professional colleagues into the conversation as early as possible, so as to inform it so that they don’t become the office of “no, it can’t be done” or “it shall not be done because of security issues,” and rather become the enablers of how to make the environment safer. There is no way, no how, that there are any guarantees. In fact, there will never be any guarantees of a hundred percent secure environments for AI, because there are none of those guarantees anywhere on the network, anywhere, ever. These are all ways of essentially protecting against the downside risk that faces all large, complex organizations. And again, I think a lot of this has to do with mindset in our institutional environments. Cybersecurity is not just another one of those important stakeholder groups at ASU; it’s one of the two principal factors, along with our commitment to privacy, which we take to be a central, foundational commitment to everything that we do in the technology space here at ASU.
[00:27:19] CO-HOST NICOL TURNER LEE: And I appreciate that, because I think that’s where a lot of higher education institutions struggle: they’re looking at the technology, and because it’s new even to the CTO in many instances, right, they’re trying to figure out how it aligns with traditional cybersecurity efforts. But there’s this other piece, Lev, that I’d love for you to opine on. Okay, I’ll put it in two cases. There are students that will reject AI in many instances; I’m finding that young people have these questions around compute power and data centers and the environment.
[00:27:58] And then there’s the obvious other case, which is that we do need these data centers, compute power, and environmental remediation accelerants to be able to generate the type of activity that you’re talking about on your campus. I just wrote a paper about data centers. I’m just curious, injecting this into the conversation: one, how do you deal with student expectations and, I guess, their AI activism? And then two, how do you manage the resiliency of the network that you’re creating, given these conversations we’re having now about this extensive demand for compute?

[00:28:36] GUEST LEV GONICK: Yeah, we invite that. In fact, over the last several weeks I have had multiple meetings with student leadership here at ASU; we have a council of student presidents. I’ve met and listened as part of the ongoing commitment to be engaged with, try to understand, and learn from our student leaders here. And you’re right, there is a continuing AI interest, as well as an activism that you would expect from student leaders along the way. And those are leading to really important conversations, one of which is: what is the university’s general view of the realities that we are building hyperscale data centers all over the country? But that’s not just out there. Here in Phoenix, we have 43 hyperscale data centers either lit or under construction. That speaks to the impact on the local economy, the local groundwater, the local power grid, and so on, and those are really important conversations to be having. And obviously, like most everything else, these are not binary choices; these are about conversations, and then shifting the question from the [00:30:00] broad, high-level conversations of the world out there to really turning it inward to ASU and what we can do in that environment. And so I’ve mentioned a couple of times that we have this fab platform technology, which we’ve designed, built, and manage and operate here, called the ASU CreateAI platform. And when students, faculty, or staff wanna invoke different language models, they can actually filter for the models that have proven to be the most environmentally sustainable. Typically they’re smaller language models, but users actually get to run them and have a chance to do comparisons in terms of not only the cost of the model and the latency of the model, but also the sustainability of the models.
[00:31:00] And those are then symbolized in the template infrastructure of the platform with a series of green leaves, or a lack of green leaves. So it’s like the whole effort that developed during the time you and I were students, which was around personal responsibility for recycling: there’s a personal way that you can actually choose to use, for example, some of the nanoscale and discrete language models, and those are ways in which you can practice what you believe to be really important. We have a whole series of efforts here, which I’ll share, which we call EdgeAI.
[00:31:44] This is all in response to that central question, which is: what is the responsible way to be thinking about our relationship to the environment around us? So EdgeAI is actually offline AI, and many of these use cases started with being able to deliver language model libraries to refugee camps,
[00:32:09] or to folks living on tribal reservations, where in some cases there is not significant internet access. We even have a fantastic project here which assumes that, in fact, there are places around the world where there is no power at all, and so it runs on solar. We’ve got a project called SolarSPELL, which is a computing device with Raspberry Pi compute on a solar panel form factor, which allows you to have access to libraries of educational content, including generative AI language models that allow you to use any number of languages and have dialogues
[00:32:49] and conversations, as much as you and I would experience, but it’s sitting on a form factor that is basically using solar energy. The language models are sub-1-billion-parameter; they sit as a card on the same form factor, plugged into the Raspberry Pi. And now we’re even doing it for offline use on mobile phones, that is to say, simply SD cards that can be utilized when you’re offline. So my point in all this is: if the issue is how to have an ethical approach to the ways in which we want to support the environment, and we want to continue to use these powerful tools, then let’s lean in and create entrepreneurial ways of building robust solutions that meet the needs of students who have got that conviction and commitment, as well as the use cases where humans can participate in the education journey by actually unlocking this kind of value.
[00:33:54] CO-HOST NICOL TURNER LEE: No, and I think that’s important, right? Because of these concerns, what you’re telling us is that it is important to have a participatory model at the university, particularly since you’re gonna have a range of learners, a range of disciplines, and a range of opportunities and concerns about the technology.
[00:34:11] But I also like how you’re talking about ways in which you can disaggregate the computing power in ways that make sense for the application or the community that’s using it. This sort of gets me back to this question that we started with, which is the internet, digital access, more traditional digital access in fact, and AI, right? Because I always look at AI, Lev, as this application layer where you’re able to engage in some unique opportunities in various sectors and various functions, et cetera. But at the end of the day, you still need internet access to connect the devices to be able to run some of these models. Am I correct about that or not? ’Cause I’m not technical, but I’m just trying to think about how we still deal with this internet that is still somewhat tethered, right, to AI, because at some point it’ll be disaggregated and more singular, I assume. But just curious how you deal with that earlier challenge that we talked about.
[00:35:13] GUEST LEV GONICK: Yeah, so again, this is Lev the technologist chatting with you about the moment that we’re in. Again, I just wanna underscore how early on we are in the AI economy and the AI technology stack. We are starting from massive compute infrastructure that is obviously consuming all the air in the room in conversations, but it’s also consuming massive amounts of power and energy, and obviously the circulation of dollars to support this environment. But it is very much informed by big iron; think of that, in our era, as mainframe computing, where you have to connect to the [00:36:00] mainframe in order to get into the computing environment. Today, you have to connect through very big pipes, running at speeds that would boggle the mind, for the training models that are out there from the frontier providers.
[00:36:12] And these are actually being measured not in megs, not in gigs, but in petabytes, sorry, terabytes rather, terabytes of throughput, in order to make these things happen. So what will happen, sure as day follows night, because this is the way the technology evolves? There will be not only edge, that is to say offline, models. So for example, we’ve been working with Nvidia right now to take a product that they call Spark, which is the DGX Spark from Nvidia. It is basically the size of a brick, it is offline, it supports very substantially sized language models, and it can support what is basically a supercomputer on your desktop, connected or not connected to the internet. And that’s the beginning of a whole new generation, which is not going to supplant the big iron activities but will develop into this sort of spectrum of ways in which AI is gonna get utilized, to the point where, again, I think it’s going to be
[00:37:32] all the way down to a small SD card that sits in either a Raspberry Pi or your mobile phone. And so internet connectivity is hugely important in the overall scheme, but what is going to happen is there will be, I think, all kinds of core capabilities being built into appliances in our homes.
[00:37:57] CO-HOST NICOL TURNER LEE: Yes.
[00:37:58] GUEST LEV GONICK: Televisions are already beginning to see it. Obviously other home appliances. Our cars are already, here, in, in Phoenix and elsewhere, four or five other cities around the country. we have physical AI already in play here, with, Waymo, a Google company that, provides, literally there are 1200 Waymo cars. Autonomous, no driver.
[00:38:22] CO-HOST NICOL TURNER LEE: Yeah.
[00:38:22] GUEST LEV GONICK: Cars racing all around the valley here every day. Yes, there is internet connectivity for those, but actually that is not actually how physical AI is working. Physical AI is working with lidar, and super computing capabilities built right into the automobile so that it’s a very rich, very robust, very generative moment in which, the internet. Part of AI will be important for 70% of the market use cases, and the other 30% will be all these other fantastic opportunities, including physical AI that will be out there, some of which will have little internet requirements. Some of them will have, probably have to have much more robust internet access. But some of them will have no internet access at all as part of the experiences that, that we will be having. And, again, those are very quickly. creating, I think value, and hopefully value for the community, whether again, you’re a senior citizen and you don’t drive and you still wanna be able to get to, the doctor’s office or otherwise, or my special needs child, wants to get to work. And, jumping in a Waymo is one way of continuing to maintain your autonomy. In ways that, we are seeing real uptake here.
[00:39:47] CO-HOST NICOL TURNER LEE: Yeah. And I think that’s gonna be maybe a conversation, Lev, we should have about a joint conference, right? Because we just had something similar to this at Brookings on the future of the internet in the age of AI. I guess, to your point, I also predict a world where there’ll be just many more disaggregated points of access, like you said, your refrigerator, your autonomous vehicle, et cetera. Or is this one in which we need to make sure, like you’re doing at ASU, that there are communities of practice so that we train more people on how to use these in their relevant sectors?
[00:40:19] So if I wanna do law, I’m now at ASU, I’m learning how this works in law, so I’m not necessarily dislocated or sized out of this opportunity. If I’m a philosopher or in the humanities, I actually see a place for AI in the work that I do. Is that your ultimate goal in doing all this stuff: to ensure that it aligns with learners’ aspirations and to make sure that they stay included? Because I do worry, Lev, I’m not gonna lie. Part of what we went through was more of an access challenge, to the hardware and all this other stuff, if you remember. It seems like with AI, it’s really gonna be a knowledge barrier if you do not have these skills.
[00:41:00] GUEST LEV GONICK: The central issue, which again, I think, invites us to have more conversation, Nicol, and convenings about this, is actually related to the workforce.
[00:41:09] CO-HOST NICOL TURNER LEE: Yes.
[00:41:10] GUEST LEV GONICK: Whether you’re preparing for the first job, which is mostly part of what we do in our part of the supply chain, if you will, in terms of our role in education, or you’re returning to education as a way to upskill or retrain, the disruption that is already beginning to unfold, and will continue to significantly inform the challenges and the opportunities going forward, calls for a program that connects institutions like mine to the communities around us in the most profound way in our era. When you and I began in this work, we understood how important the internet was going to be for workforce needs, as well as essentially for a basic idea of being [00:42:00] literate and a citizen in the internet age. In the age of AI,
[00:42:05] this takes on, I would say, a much more urgent call for community engagement, for solidarity work, and for building allies across the full diversity of the community around us, because the disruption is not simply going to be in one or another layer of the socioeconomic realities of our society; it will touch many different parts, whether you’re a blue-collar worker or a white-collar worker or in any number of other parts of the workforce. And this then is an opportunity for us to engage in our commitments to poverty alleviation, to opportunities to unlock value, and to unlock opportunities for folks: to help returning veterans, to help refugees get resettled and gain access not just to the physical tools required to be an informed participant in the 21st century, but also to the literacies required to be an active participant and have a meaningful engagement with the leading-edge parts of the economy. Because the divide between those who get onto the AI economy and those who are left behind, or are made to be left behind, that divide, I fear, is going to be greater than anything we’ve ever seen in the digital divide debate and the digital divide advocacy work that needs to be done. And for that, I think organizations like ASU have a fantastic opportunity to create a proof point on the art of the possible.
[00:44:08] CO-HOST NICOL TURNER LEE: Yeah, I agree. This has been fantastic. Part of what you’re doing is saying, hey, let’s bring agency back into higher education, but let’s do it in a way where it makes sense for our ultimate goal of training the next generation of leaders. And most importantly, let’s have it as a participatory model, which I think is so interesting in terms of the conversations that you and I have been in around bias and where there are breakdowns in some of the consequential fears. Don’t get me wrong, we still have those fears, but I think what you’re laying out is a framework for just more engagement in a critically important and safe manner, which is not always discussed among your colleagues, just saying. So with that, I wanna thank you, Lev, for joining me for the TechTank Podcast. Before we end this, also let people know where they can find out more information about ASU Enterprise Technology, particularly the program that you put together, ASU CreateAI.
[00:45:11] GUEST LEV GONICK: Yeah, please, anytime, take a look at ai.asu.edu. It is a window into everything that we’ve discussed today. It’s also a window into hundreds of stories of the ways in which our students, faculty, and staff are engaged with AI across the full breadth of the institution.
[00:45:33] CO-HOST NICOL TURNER LEE: I love it. Thank you so much, Lev, for joining us.
[00:45:36] GUEST LEV GONICK: Thank you, Nicol, as always, for inviting us, and I look forward to the next opportunity.
[00:45:41] CO-HOST NICOL TURNER LEE: I know, I hope to see you soon in Arizona, ’cause it’s cold out here in Washington, DC. So listen, folks, please explore more in-depth content on tech policy issues at TechTank on the Brookings website, which is available at www.brookings.edu.
[00:45:57] Your feedback on the substance of this episode matters to us, so leave a comment, let us know your thoughts, share it with someone else, and suggest other topics. We’re going into 2026; we wanna hear from you. This concludes another insightful episode, where we make bits into palpable bites. And until next time, my friends, thank you for listening.
[00:46:24] Thank you for listening to TechTank, a series of roundtable discussions and interviews with technology experts and policymakers. For more conversations like this, subscribe to the podcast and sign up to receive the TechTank newsletter for more research and analysis from the Center for Technology Innovation.
Acknowledgements and disclosures
Google is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).