Full-Time
Posted on 10/31/2025
Global political risk research and consulting
$250k - $300k/yr
London, UK + 2 more
More locations: Washington, DC, USA | New York, NY, USA
In Person
Eurasia Group is a global political risk research and consulting firm. It analyzes political and policy developments to help clients understand how politics affects markets, investments, and strategy. It delivers analysis and advisory services through research reports, country risk profiles, scenario planning, and tailored briefings produced by research analysts who are trained social scientists with deep regional expertise and language skills. The firm differentiates itself with a worldwide network of experts and offices in major cities, giving it access to on-the-ground insight and a wide range of perspectives. Its goal is to help clients make informed business decisions in politically sensitive or unstable environments.
Company Size
201-500
Company Stage
N/A
Total Funding
N/A
Headquarters
New York City, New York
Founded
1998
Professional Development Budget
Alcoa and Eurasia Group have published a new white paper titled "Competitiveness & Green Transition in the Aluminum Industry: Finding Synergies or Facing Trade-Offs."
Eurasia Group, a U.S.-based research firm, announced on Jan. 8 its top 10 risks in the world for 2024, ranking "the United States vs. itself" as the biggest risk.
2023 saw governments around the world grapple with the commercial emergence of artificial intelligence (AI). From the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, to the European Union's (EU) AI Act, China's already-implemented policies, and Japan's "Hiroshima Process," the world's largest economies each took a distinct approach to balancing oversight of AI's implications with support for its further innovation. 2024 is already shaping up to be a year in which national, and even supranational, policies are sharpened, signed, and implemented.

But regulation of AI is a complex and evolving topic that involves many considerations, not least the fact that the technology knows no borders. That puts a spotlight on global cooperation and coordination around industry standardization, similar to the frameworks that apply to financial regulation, cars, and healthcare.

In the latest discussion around regulation of the technology, the International Monetary Fund (IMF) has laid out an action plan for AI governance in a report entitled "Building Blocks for AI Governance." Authored by AI pioneer Mustafa Suleyman and risk consultant Ian Bremmer, the report outlines five guiding principles "to govern AI effectively," noting: "If the Cold War was punctuated by the nuclear arms race, today's geopolitical contest will likewise reflect a global competition over AI."

After all, AI represents an innovation that can affect nearly every facet of modern life. That means AI governance is not a single, linear problem to be solved, and AI cannot be handled on the basis of previous technological oversight, because it is unlike any previous technology. Already, the IMF has noted in a separate report that up to 60% of jobs in advanced economies will be impacted by AI.

Read also: How AI Regulation Could Shape Three Digital Empires

Learning How to Manage and Govern AI

Many Western observers believe that an ongoing process of interaction between governments, the private sector, and other relevant organizations is necessary for AI regulation to be implemented effectively. And few believe that effective oversight of AI, meaning a framework that supports innovation while negating AI's risks, can be achieved with a single piece of legislation.

"Trying to regulate AI is a little bit like trying to regulate air or water," University of Pennsylvania law professor Cary Coglianese told PYMNTS earlier this month as part of the "TechReg Talks" series. "It's not one static thing."

"AI's unique characteristics, coupled with the geopolitical and economic incentives of the principal actors, call for creativity in governance regimes," wrote Suleyman and Bremmer for the IMF. Because of the speed at which the capabilities of AI systems are evolving, the present moment is one of growing urgency for businesses, governments, and both inter- and intra-national institutions to understand and support the benefits of AI while working to mitigate its risks.

"Any idea that regulation is going to be globally ubiquitous is a fool's errand," Shaunt Sarkissian, CEO and founder of AI-ID, told PYMNTS in November. He suggested an approach where regulations primarily target use cases, with technology and security measures tailored to specific applications, arguing, for example, that within healthcare the existing regulations, such as HIPAA, provide a strong foundation.

See more: US Eyes AI Regulations that Tempers Rules and Innovation

The five guidelines published by the IMF call for effective AI oversight to be:

Precautionary, as in weighted toward AI's potentially catastrophic downsides;
Agile, as in capable of responding in turn to AI's rapid advances;
Inclusive, as in collaborative and not dominated by any one actor, public or private;
Impermeable, as in providing no avenue for exit from compliance;
and Targeted, as in modular and adaptable rather than one-size-fits-all.

Elsewhere on the AI regulation front, the House Financial Services Committee last week formed a working group to examine the effect of AI on the financial services and housing industries, while Senate Majority Leader Chuck Schumer has gone on the record saying that action on AI needs to come from Congress, not the White House.

When it comes to adherence to existing laws around AI, Cornell University found that just 18 of 391 companies in New York City had disclosed the impact of AI on their hiring decisions in accordance with an ordinance passed six months ago.
Last month, I reported on the widening web of connections between the effective altruism (EA) movement and AI security policy circles, from top AI startups like Anthropic to DC think tanks like the RAND Corporation. Those connections link EA, with its laser focus on preventing what its adherents say are catastrophic risks to humanity from future AGI, to a wide swath of DC think tanks, government agencies, and congressional staff. Critics of the EA focus on this existential risk, or "x-risk," say it comes at the expense of a necessary focus on current, measurable AI risks, including bias, misinformation, high-risk applications, and traditional cybersecurity.

Since then, I've been curious what other AI and policy leaders outside the effective altruism movement, but also not aligned with its polar opposite belief system, effective accelerationism (e/acc), really think about this. Do other LLM companies feel equally concerned about the risk of LLM model weights getting into the wrong hands, for example? Do DC policymakers and watchers fully understand EA's influence on AI security efforts?

At a moment when Anthropic, well known for its wide range of EA ties, is publishing new research about "sleeper agent" AI models that dupe safety checks meant to catch harmful behavior, and even Congress has expressed concerns about a potential AI research partnership between the National Institute of Standards and Technology (NIST) and RAND, this seems to me an important question. In addition, EA made worldwide headlines most recently in connection with the firing of OpenAI CEO Sam Altman, as its non-employee nonprofit board members all had EA connections.

What I discovered in my latest interviews is an interesting mix: deep concern about EA's billionaire-funded ideological bent and its growing reach and influence over the AI security debate in Washington, DC, along with an acknowledgement by some that AI risks beyond the short term are an important part of the DC policy discussion. The EA movement, which began as an effort to "do good better," is now heavily funded by tech billionaires who consider preventing an AI-related catastrophe its number one priority, particularly through funding AI security (also described as AI "safety") efforts, especially in the biosecurity space.

In my December piece, I detailed the concerns of Anthropic CISO Jason Clinton and two researchers from the RAND Corporation about the security of LLM model weights in the face of threats from opportunistic criminals, terrorist groups, or highly resourced nation-state operations. Clinton told me that securing the model weights for Claude, Anthropic's LLM, is his number one priority. The threat of such actors accessing the weights of the most sophisticated and powerful LLMs is alarming, he explained, because "if an attacker got access to the entire file, that's the entire neural network."

RAND researcher Sella Nevo told me it is plausible that within two years AI models will have significant national security importance, such as the possibility that malicious actors could misuse them for biological weapon development. All three, I discovered, have close ties to the EA community, and the two organizations are also interconnected through EA: Jason Matheny, RAND's CEO, is also a member of Anthropic's Long-Term Benefit Trust and has longtime ties to the EA movement.
My coverage was prompted by Brendan Bordelon's ongoing Politico reporting on this issue, including a recent article quoting an anonymous biosecurity researcher in Washington who called EA-linked funders "an epic infiltration" of policy circles. As Washington grapples with the rise of AI, Bordelon wrote, "a small army of adherents to 'effective altruism' has descended on the nation's capital and is dominating how the White House, Congress and think tanks approach the technology."

Cohere pushes back on EA fears about LLM model weights

First, I turned to Nick Frosst, co-founder of Cohere, an OpenAI and Anthropic competitor that focuses on developing LLMs for the enterprise, for his take on these issues. He told me in a recent interview that he does not think large language models pose an existential threat, and that while Cohere protects its model weights, its concern is the business risk of others getting access to them, not an existential one. "I do want to make the distinction…I'm talking about large language models," he said.
Eurasia Group released its "Top Risks 2024" report, naming "the United States versus itself" as the year's "biggest challenge," centered on the November presidential election, which it said will be "by far the most consequential for the world's security, stability, and economic outlook."