
What to expect from the India AI Impact Summit | The TechTank Podcast

February 17, 2026


  • The India AI Impact Summit will focus on AI deployment and real-world impact, featuring an AI expo and technical demonstrations. 
  • As the first major AI summit in this series hosted in the Global South, the convening reframes global AI governance around who benefits from AI and under what conditions. 
Visitors walk past a banner featuring India's Prime Minister Narendra Modi (L), as they arrive to attend the AI Impact Summit in New Delhi on February 17, 2026. (Photo by Arun SANKAR / AFP via Getty Images)

TechTank, a biweekly podcast from the Center for Technology Innovation at Brookings, explores today’s most consequential technology issues. Moderators Nicol Turner Lee and Darrell West speak with experts and policymakers to share data, ideas, and policy solutions that address the challenges of our digital world.

India will host the latest of a series of global AI summits this month, marking the first of these convenings to take place in a Global South region. The multistakeholder event, named the AI Impact Summit, will center “People, Planet, and Progress” and, according to Indian officials, is intended to move “from principles to practice.”  

The summit will feature keynotes, panels, a research symposium, and an expo of deployable AI applications—all with a focus on impact. As the summit unfolds, we will see how these discussions may drive the global governance agenda forward and how high-level conversations pave the way for the technical work of standards, assurance, and system evaluation. 

On this episode of the TechTank podcast, guest host Brooke Tanner, a research analyst and lead author of the Brookings paper “Is AI sovereignty possible? Balancing autonomy and interdependence,” is joined by Cameron Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow in the Center for Technology Innovation at Brookings, and Elham Tabassi, director of the Artificial Intelligence and Emerging Technology Initiative at Brookings, to discuss the most important global governance issues ahead of the summit and their expectations for what it may bring.

Listen to the episode and subscribe to the TechTank Podcast on Apple, Spotify, or Acast.   

Transcript

[00:00:00] CO-HOST NICOL TURNER LEE: You are listening to TechTank, a biweekly podcast from the Brookings Institution exploring the most consequential technology issues of our time. From racial bias in algorithms to the future of work, TechTank takes big ideas and makes them accessible.

 

[00:00:24] GUEST HOST BROOKE TANNER: Welcome to the TechTank Podcast. I am Brooke Tanner, research analyst at the Center for Technology Innovation, filling in as the guest host for this episode. India is hosting the global AI Impact Summit in New Delhi from February 16th through February 20th, convening heads of state, ministers, senior policymakers, industry CEOs, researchers, startups, and civil society to address both AI opportunities and AI divides.

 

[00:00:51] The summit is positioned as the next stop in a recent sequence of global AI summits, following the 2023 UK AI Safety Summit, the 2024 AI Seoul Summit, and the 2025 AI Action Summit in Paris. Organizers have described the India AI Impact Summit as the first major global AI summit of this series to be hosted in the Global South, intended to shift the global AI conversation from principles and pledges toward implementable cooperation and measurable public value. [00:01:20] This year, the summit will feature keynotes, panels, an exposition of deployable AI applications, and a research symposium. The last day of the summit coincides with the Global Partnership on Artificial Intelligence (GPAI) Council meeting, an international initiative hosted by the Organization for Economic Cooperation and Development. [00:01:39] Today I am joined by two distinguished guests: Cameron Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow in the Center for Technology Innovation at the Brookings Institution and co-founder of the Forum for Cooperation on AI here at Brookings, and Elham Tabassi, director of the Artificial Intelligence and Emerging Technology Initiative and senior fellow in the Global Economy and Development research program at the Brookings Institution. [00:02:05] Cam and Elham, thank you so much for joining me.

[00:02:09] GUEST ELHAM TABASSI: Thanks for having us.

 

[00:02:11] GUEST CAMERON KERRY: It’s great to be here. Thank you, Brooke.

 

[00:02:14] GUEST HOST BROOKE TANNER: You’re both attending the summit this week. What are you most looking forward to? What should we expect going into it, Cam?

 

[00:02:23] GUEST CAMERON KERRY: I think, look, this is an international crossroads of AI that really pulls together lots of people across the private sector, governments, and civil society organizations. So like any kind of convention, it is really about a lot of the people involved and the conversations that you get to have alongside the programming. There’s a lot to learn, a lot of opportunity to build networks, ask questions, and make connections.

 

[00:03:00] GUEST ELHAM TABASSI: Yeah, on my part, what I am looking for in the conversations, in addition to what Cam said about meeting people and hearing about what they’re doing, is this arc. This is sort of the fourth summit, if we also count the Seoul summit in the series, and what we have seen is an expansion of the agenda. The first one started by focusing on safety, at Bletchley Park in the UK, with the purpose of getting the global actors aligned that advanced AI has beneficial uses but can also carry serious risks, shared across the different global actors. That was necessary groundwork. But over the years, as participation grew and the conversation grew, more countries and more stakeholders wanted to be part of these discussions about innovation, but also deployment and diffusion, because that’s really where we are gonna see the beneficial use of AI. So the conversations are moving from just what to prevent, and being mindful about what to prevent, which is extremely important, to how to build, how to scale, how to actually ensure that AI is impacting everybody’s life in a beneficial way. This arc, from awareness about risk to capability building to thinking about impact and, from my point of view, accountability, to make sure that we are achieving what we want to do and minimizing the negative impact, is something that I will be looking for through the conversations.

 

[00:04:43] GUEST HOST BROOKE TANNER: Great. I love how you framed how the focus has shifted over the years. Cam, do you think that this shift in the summit’s focus to impact is going to change the type of questions that policymakers are asking at the summit this year?

 

[00:04:58] GUEST CAMERON KERRY: I do think so, Brooke. We’ve seen that over the arc that Elham described, and particularly at the much larger, more broad-based summit last year in Paris, which certainly started a shift from safety to deployment and diffusion. And I think we’re seeing that in the focus of the India Summit; it is really looking at practical applications. India has been very pragmatic about how it is approaching AI, and I think it wants to make that a feature of this summit. That reflects what we’ve seen in some of our Forum for Cooperation on AI discussions about approaches to AI and AI risk: there are important but somewhat abstract and longer-term issues about existential risks of various kinds, and that was certainly the focus at the first summit that the UK convened in 2023. But for many countries that really want to be able to enjoy the benefits of AI, it really is about how we can deal with more short-term issues and how we get our hands on it. [00:06:35] How do we benefit? How do we participate in this enormously important development of technology?

 

[00:06:44] GUEST ELHAM TABASSI: If I can double-click on what Cam said about how this shifts the questions policymakers ask: Cam talked about applications, and all of that makes the policy questions more operational. Is AI improving healthcare delivery? [00:07:01] Is it helping farmers access credit or market information? Are public services becoming more effective? It also embeds the question of who is benefiting and who is not, and I think that’s an important shift and an important focus to have. Again, when we look at the arc, we are moving from the safety lens, answering the questions of what could go wrong, how we can control it, how we can make sure we have control over bad outcomes and impacts. With the impact lens, we are more focused on what we are trying to achieve and how we will prove that it’s working, and working reliably. That will also shift the attention toward enablers: data infrastructure, talent and skills, connectivity, adoption.

 

[00:07:50] At the society level, at the public sector level. And I think these are all good impact questions.

 

[00:07:56] GUEST HOST BROOKE TANNER: India has framed the Impact Summit around three ideas: people, planet, and progress. Cam, when thinking about impact at the summit, is this framing organizing familiar AI governance debates, or is it a signal of a different set of priorities compared to some of the more explicitly risk-based approaches?

 

[00:08:20] GUEST CAMERON KERRY: Yeah, look, I think it goes back to what I said about focusing on practical applications. A big part of the follow-on is gonna be an expo, an AI expo, so a lot of demonstrations around the program here, and probably a lot of discussions about ways that governments are actually putting [00:08:47] AI to use, much as India has made a big deal out of digital public infrastructure and the computing it does to support its government payment systems and ID systems: proprietary systems, not open to the public, but ones that have a major role in the delivery of public services in India.

 

[00:09:12] GUEST HOST BROOKE TANNER: I wanted to talk about one of the recent Forum for Cooperation on AI reports from last year that discusses the importance of interoperability and agility across AI governance regimes when building a global network of AI governance approaches. As you look at India’s approach going into the summit, is it a step toward moving this interoperability between AI governance approaches forward?

 

[00:09:42] GUEST CAMERON KERRY: I certainly hope so. There is a tension, I think, between interoperability and a movement toward something that you’ve been working on, Brooke: sovereign AI, the desire to have lots of the components of the AI stack within one’s own country. Some of what India is doing is in that direction, and many other countries and governments, from the European Union to governments across the world, are looking at ways to do that. That has some risk of fragmentation, but if it can be done in ways that are interoperable, that are adaptive, that is very much the direction I think we need to be moving. The work that we did last year that you referred to was focused on the ways that AI development and regulation mirror the global internet, which has functioned on interoperable protocols and networks and systems. And AI will certainly provide additional benefits if that can be the case in the way that AI operates.

 

[00:11:15] GUEST HOST BROOKE TANNER: I want to interrogate that tension you raised between interoperability and sovereignty, both of which will frame the discussion and some of the conversations going on. India has argued that the concentration of AI capabilities in a few countries and firms is itself a risk, which is motivating some of these sovereign AI initiatives, and its government has made investments to lessen this dependence, including through new sovereign AI models developed domestically, which it plans to announce at the summit. That is part of the paper you referenced, Cam, which should be published on the Brookings website this weekend; it was great to co-author it with both of you. Elham, maybe to turn to you: do you expect those questions around more explicit sovereign AI strategies or infrastructures to be part of the conversation at the summit?

 

[00:12:14] GUEST ELHAM TABASSI: I think it’s very likely. Look, it has been a consistent theme in India’s AI diplomacy and in the things that we hear from them, and hosting this summit gives them the visibility to talk about all of these things. In the framing of your question, you also talked about concentration, and then the sovereignty, dependency, and resilience that we want to get at. In a way, the core argument could be that when advanced AI capability is concentrated in a small number of countries or a small number of firms, most others become technology-dependent, or maybe even rule takers, instead of being part of the conversations that shape the rules or having more sovereignty and control over their own development and deployment. That obviously raises concerns about dependencies on external infrastructures, models, and governance approaches. So I think those conversations will be part of the India Summit, because India and several other countries have framed this as both a development issue and a resilience issue. It’s a development issue because AI capacity affects economic opportunity and the diffusion and adoption that you can get, and a resilience issue because AI systems built for a narrow set of contexts may not transfer well globally. A case in point is medical models trained on certain demographics that may not work well elsewhere, but we are also hearing about language and culture dependencies. In a country like India, with many languages and dialects, that becomes a question of the access and usability of the models if they are not aligned with the language and useful for the people. So we can expect that discussions around broader access to compute, more inclusive standard setting, shared evaluations, and capacity-building mechanisms will be part of the summit conversations. I just want to add another point.
You talked about the paper that is gonna come out, and again, it was a privilege and I really enjoyed working with both of you on writing it. There might be valid reasons to think about concentration, to support coordination and safety investments, but the real debate is about balance, not redistribution, and how we get to the right place. And I think the summit will surface that tension, and where the right balancing point should be, rather than trying to fully resolve it, right? These summits don’t fully resolve anything; they surface the conversation. So I suspect it’s gonna be part of the conversation.

 

[00:14:53] GUEST HOST BROOKE TANNER: Great. There’s a lot to follow up on there, but I’m glad you raised the standard-setting aspect, Elham, as that is very important. We’ve now seen a couple of high-level summit declarations with principles of trust and safety and alignment on those, but we all know on this call that standards bodies are doing the quieter work of turning those ideas into something more operational. Right now, Elham, where do you see the biggest gap between these higher-level commitments and usable technical standards?

 

[00:15:28] GUEST ELHAM TABASSI: Yeah, thank you for that question. I come from 26 years of working for the National Institute of Standards and Technology, so standards will always have a near and dear place in my heart. And I’m really glad that we are getting all this attention on standards and the role of standards for innovation, for more responsible deployment, but also for improving trade and all of this. But where are the gaps? That’s the ongoing conversation. We need testing standards. We need shared benchmarks and measurement methods that governments, buyers, deployers, and entities that want to use AI can rely on. That space is still emerging. A lot of people are working on it; it’s a really technical challenge and a scientific gap to come up with those testing standards.

 

[00:16:11] Another one that I want to point out, and this goes back to another paper that Cam and Brooke and several others at Brookings worked on, is transparency and reporting. On clear reporting standards, we looked at the Hiroshima AI process and came up with some recommendations there. The bottom line is that there is a lot of agreement that it will be very helpful for developers, deployers, and all actors across the AI value chain to share information about training data sources, about limitations, risks, the testing that they have done, the test data they used, how they did the tests, and what they learned through those tests. But while we have a lot of agreement that information sharing is good, what information exactly to share, at what level it can be useful for different audiences, and in what format, so it can bring consistency and interoperability, is not quite answered or tackled. So that would be another set of standards that would be good to work on. Just staying with the three categories, another category I would add is specifications, or maybe standards, for deployment. A lot of the frameworks and guidance out there focus on the model itself, but what we need is a focus on how the system, the model, is going to be used in a real setting, in the environment of use, with real people and workflows, with lines of governance and accountability, and with policies and procedures in place. Frankly, that’s where many of the failures actually happen. So paying attention to the technology is important, but paying attention to the people and the processes is also very important. To summarize: we have high-level principles at the top, and those conversations are happening, which is good; we need to continue them. We have technical research at the bottom; a lot of research institutions, within the labs and within the universities, are working on that.
That’s great and good. But operational standards would actually be the middle layer that connects the principles and the technical research, and that layer is still being built and quite underdeveloped. I would say that’s the implementation gap.

 

[00:18:29] GUEST HOST BROOKE TANNER: Great, thanks. And Cam, maybe you could speak to this: when we’re thinking about this implementation gap in standard setting and broader AI governance, we have seen more governments invest in internal technical expertise, but there are clearly still gaps, as Elham has outlined. How important do you think that capacity-building dimension is when trying to make these governance frameworks actually work in real cases, as Elham was elaborating?

 

[00:19:00] GUEST CAMERON KERRY: Yeah, I think that’s hugely important, Brooke. Look, we see this in our work at Brookings: as scholars, we are all having to learn how to adapt to AI, how to incorporate it into the work we do, what the strengths and weaknesses are, what the applications are. And I think that is very much needed across all sectors, in government particularly, where I think there are significant gaps in expertise and challenges in deploying the technology, for security and for economic reasons. It’s harder to just play around with AI if you are in government, but that is something we all need to be doing. It’s happening in big ways in the private sector as people look at applications, and governments need to be doing the same. Looking at this internationally, where the development issues loom enormously large, there’s a tremendous amount of capacity building and talent building that needs to be done. For many countries, the disparities in AI development and the reasons there is concentration have to do not just with wealth, but with talent. The places where AI is being adopted the most are the places with the greatest level of talent, and that training needs to expand broadly. That, I think, alongside communications infrastructure, is the core of the development issues around the world.

 

[00:21:15] GUEST HOST BROOKE TANNER: Thanks. So thinking about the location, it is very relevant in setting the tone of discussions, and organizers have described this summit as the first major global AI summit of this series to be hosted in the Global South. What does that shift in location change in agenda-setting power? And do you also see continuity, Cam?

 

[00:21:43] GUEST CAMERON KERRY: I think it affirms a trend that we’ve seen, and we’ve talked a little bit about the broadening of the AI discussion around the world. That’s happened at the United Nations, it happened over the course of the previous summits, and holding it in India really punctuates that. India plays an interesting role as a geopolitical player. It always has, in the sense that it’s classically been a non-aligned country that clearly goes its own way. It has recently concluded trade agreements with the U.S. and with the EU, but it continues to trade in significant degrees with China and with Russia. It is certainly heavily using and building on top of U.S. AI models; it’s also using China’s DeepSeek. So India’s role in this summit will be to build on its practical approach, the ways that it’s being pragmatic in what models it uses, how it deploys AI, and how it puts it to work. I think that’s going to be very much a theme of the summit and a feature of India’s role.

 

[00:23:24] GUEST HOST BROOKE TANNER: This is a really transnational issue, not just AI deployment but AI governance. One recurring challenge across these summits is the question of accountability across AI infrastructure and deployment, especially when the developers, deployers, and users are spread across countries and jurisdictions. Elham, how do you think policymakers are thinking about that accountability problem right now?

 

[00:23:56] GUEST ELHAM TABASSI: That’s one of the toughest governance problems, because responsibility, as you said, is distributed across borders and across the AI value chain. A system might be built in one country, adapted in another, deployed by a third company, and used globally, with data sourced across the globe, or maybe concentrated in part of the globe. So it’s really, as you said, transnational. And the question, as you pointed out, is that when a negative impact or harm occurs, accountability is not very obvious, with all of these different actors across the value chain being part of it. This is one of the focuses of the work of AI and Emerging Tech at Brookings: what I call the end-to-end, full-stack governance issue, because we cannot solve it at a siloed layer of the AI value chain; these layers have interdependencies and touch each other. You asked from the policy angle. I just want to use this time to say that there are a lot of technical and procedural questions to address there, which will be a topic for another podcast. But from the policy point of view, it seems to me that policymakers around the globe are now testing, trying out, at least three different approaches.

 

[00:25:13] One of them is putting the primary responsibility on the deployers, starting there and scoping with the flow of the chain. The actors closest to real-world use, who understand the context of use, are supposed to have a better understanding of the limits, capabilities, and risks, so the primary responsibility is put on the deployer in this sense. Another approach goes the other way and puts the baseline obligation on the developers of the frontier models, because, the argument goes, everything else is built on those. That approach requires testing, safeguards, and disclosures before release, before making the models available for deployers to use, and we are seeing examples of that around the world, probably including U.S. policy proposals. The third approach focuses on standards alignment and mutual recognition. If we have agreed-upon specifications and standards for the characteristics we want to see in systems, agreed, open test methods and methodologies, and maybe even better, certification schemes, then systems tested or certified in one jurisdiction can be accepted in other jurisdictions and regions. That helps reduce fragmentation and brings a sort of baseline understanding of the expectations for trustworthy design and development and responsible use or deployment. What’s happening today, really, is that most enforcement is still at the national level. I don’t think we have many cross-border accountability tools available; those are very limited. My hope is that international summits can lay the groundwork on shared definitions, on compatible standards, on the technical cooperation that should take place, so that responsibility can be understood as distributed across the AI value chain, and so we can figure out what to do to actually trace that responsibility in a clear way.
These are not easy questions. We don’t have the solutions in front of us. As I said, it’s not just a policy question; we definitely need a lot of technical and scientific work toward it, and these are some of the things that we are working on.

 

[00:27:40] GUEST CAMERON KERRY: You covered a lot of ground there, but you did talk earlier about transparency systems, and I think one of the things we’ve seen come out of international efforts is the Hiroshima AI principles and reporting process, which comes out of work done in the G7 and a code of conduct in 2023. Then last year at Paris, the OECD and governments came together and put out a reporting framework, and the report that we put out with CDT really, I think, provided a roadmap for strengthening the reporting, strengthening the accountability, something that can be built on going forward. We need many channels doing this: developing the measures, looking at the reporting, and building the ecosystems of accountability around the world.

 

[00:28:45] GUEST ELHAM TABASSI: Can’t agree more.

 

[00:28:46] GUEST HOST BROOKE TANNER: Great. Thanks Cam. This discussion has given a great overview of the opportunities for the summit and some of the questions that I’m sure you both will keep in mind during those conversations. For our listeners who might not be listening to every single panel at the AI Impact Summit this week, what do you think they should be watching for as outcomes to understand where it might have long-term influence or success? Elham, what do you think?

 

[00:29:19] GUEST ELHAM TABASSI: Yeah, that’s actually a very good question, because these summits are really important convenings, but what happens next, after them, is the bigger question. The simple test is whether anything continues after the headlines fade, after everybody leaves India and goes back home. In international tech governance, success shows up in the follow-through, not just the declarations after the summit. So in unpacking what those follow-throughs can be: one of them is at the institutional level. Do any lasting mechanisms come out of this? Ongoing working groups, shared evaluation efforts, funding commitments? I’m not saying that we want continuing working groups for the sake of working groups, but we want them to have objectives and purposes toward operationalization and implementation.

 

[00:30:08] Operationalizing some of the high-level summit declarations would be good, because declarations are common; durable structures are the real indicator of operationalizing those declarations. I think the second thing is how much uptake of the language and the framing of the summit we are going to see and read in various forums. We started the conversation with the impact framing of the summit. Will this continue? Will it start appearing in other governments, in standards and evaluations, and in many different networks, including the safety community and the evaluations and standards communities? Why is it important to stick with that framing? Because when the framing spreads, priorities usually follow. And again, going back to the beginning, the positive impact and potential beneficial use of AI are not guaranteed until we actually work on implementation and deployment. And the last thing I will say is: after everybody leaves and goes back to their home countries and institutions, who actually stays in the room, if you will? You pointed out, and Cam talked about, the importance of this being the first summit in the Global South. Will we see more emerging and developing countries get more sustained roles in the conversations on governance, in the standards bodies, in the technical evaluations, and in all the scientific and technical challenges of achieving responsible deployment of AI, not just summit participation? How long are we going to see their presence in the conversations? So I think that’s where I’ll stop: beyond the participation and the conversation, what broader participation in technical work, in standards work, in governance and policy work is gonna happen between this summit and the next?

 

[00:32:03] GUEST CAMERON KERRY: Yeah, Brooke, I think your question really asks: okay, it’s the Impact Summit, so what is the impact that it should have? And I think Elham has really given the answers. It’s gonna be what it does on the ground in terms of follow-ups: standards, measures, adoption of specific technologies, opportunities for building the knowledge and the talent that ultimately are gonna crowdsource a lot of these issues and lead us to some wisdom about artificial intelligence.

 

[00:32:51] GUEST HOST BROOKE TANNER: Great. Cam, I love how you put that: we’re looking for impact at the Impact Summit. Thank you both so much for joining me to discuss this important topic, Cam and Elham, and safe travels to the summit this week.

 

[00:33:06] GUEST CAMERON KERRY: Thanks very much.

 

[00:33:07] GUEST ELHAM TABASSI: Thank you, Brooke, for bringing us together. Yeah, that was a very good conversation.

 

[00:33:13] GUEST HOST BROOKE TANNER: Please explore more in-depth content on tech policy issues at Tech Tank on the Brookings site accessible at brookings.edu. Your feedback matters to us about the substance of this episode, so please leave a comment and let us know your thoughts or suggest topics you’d like us to discuss in future episodes. This concludes another insightful episode of the Tech Tank Podcast, where we make bits into palatable bites. I am Brooke Tanner, research analyst at the Center for Technology Innovation. Until next time, thank you for listening.

 

[00:33:49] CO-HOST NICOL TURNER LEE: Thank you for listening to Tech Tank, a series of round table discussions and interviews with technology experts and policy makers. For more conversations like this, subscribe to the podcast and sign up to receive the Tech Tank newsletter for more research and analysis from the Center for Technology Innovation at Brookings.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).