Full-Time

Tier 2 Information Technology Technician

Posted on 9/18/2025

Eurasia Group

201-500 employees

Global political risk research and consulting

Compensation Overview

$85k - $100k/yr

New York, NY, USA

Hybrid

Category
IT & Security
Required Skills
Microsoft Azure
Management
Word/Pages/Docs
iOS/Swift
Excel/Numbers/Sheets
PowerPoint/Keynote/Slides
Requirements
  • A love for all things technical; you spend most of your day helping people and your downtime learning about new technologies
  • A deep technical foundation and real-world experience working with mid-market organizations
  • Strong customer service skills
  • Ability to work with all levels – from assistant to executive
  • Written and verbal communication skills to understand employees’ issues and explain technical information in clear terms
  • A track record of outstanding communication, technical expertise, and successful project delivery
  • Time-management skills to handle appointments efficiently and stay on schedule
  • Problem-solving skills to diagnose problems with malfunctioning hardware/software
  • Microsoft administration:
    • Office 365
    • Active Directory
    • Autopilot
    • Azure/Intune management
  • Platforms supported:
    • Windows 10/11
    • macOS 13/14
    • iPhone/iPad – iOS 16/17
  • Network knowledge:
    • Windows Server
    • Cisco routers/switches
    • Cisco firewalls
  • Security knowledge:
    • Microsoft Defender
    • Trellix
    • FIDO
  • Hardware and software installations/deployments
  • Hardware support:
    • Apple Macintosh/iPhone/iPad
    • Dell Latitude
    • HP/Canon printers
  • Software support (partial list):
    • Microsoft Excel, OneDrive, Outlook, PowerPoint, SharePoint, Teams, Word
    • Adobe Acrobat
    • Chrome
    • Datto
    • Exclaimer
    • Grammarly
    • Slack
  • IT ticketing:
    • ServiceNow
Responsibilities
  • Perform basic troubleshooting and assistance on various software applications and hardware systems for department users
  • Assist with onboarding and offboarding employees/consultants, both domestic and international
  • Assist with the installation, configuration, and maintenance of computers, workstations, and other related equipment and devices
  • Apply Microsoft security capabilities
  • Provide individual instruction and training to users on new or updated technologies
  • Maintain and update the record-keeping system; may document projects and maintain user instructions
  • Thrive in both team and independent work environments
  • Serve as a technical liaison to both internal and external clients
  • Respond to employee support inquiries in a timely fashion, with an emphasis on being informative and helpful
  • Continue to learn and broaden technically to add value to both employees and external clients
  • Assist with tracking inventory levels of equipment and materials
  • Communicate and work with team members to coordinate efforts to support employees
  • Monitor and report any issues with critical environmental, networking, security, and server systems
  • Communicate with other departments to assist in the execution of change management policies and procedures
  • Document procedures and produce technical writing
  • Work in partnership with the firm’s technology vendors to deliver projects on time and within budget
  • Evangelize new or emerging technologies
  • Partner with other EG employees to understand their business goals and strategies and make recommendations for solutions

Eurasia Group is a global political risk research and consulting firm. It analyzes political and policy developments to help clients understand how politics affect markets, investments, and strategy. Its products are analysis and advisory services delivered through research reports, country risk profiles, scenario planning, and tailored briefings produced by research analysts who are trained social scientists with deep regional expertise and language skills. The firm differentiates itself with a worldwide network of experts and offices in major cities, enabling access to diverse on-the-ground insight and a wide range of perspectives. Its goal is to help clients make informed business decisions in politically sensitive or unstable environments.

Company Size

201-500

Company Stage

N/A

Total Funding

N/A

Headquarters

New York City, New York

Founded

1998

Simplify's Take

What believers are saying

  • Energy transition acceleration drives demand for geopolitical risk analysis among institutional clients.
  • Sustainability council expansion with Suntory demonstrates recurring revenue from corporate advisory programs.
  • Expertise in China policy and European energy positions firm for strategic client growth.

What critics are saying

  • 2024 US election prediction failed to materialize, eroding credibility with institutional clients.
  • AI-powered geopolitical analysis platforms could commoditize the core forecasting value proposition within 24-36 months.
  • Client concentration in energy sector creates revenue vulnerability if transition accelerates unexpectedly.

What makes Eurasia Group unique

  • Senior hires from Norwegian energy policy and US State Department enhance geopolitical expertise.
  • White paper on aluminum industry green transition positions firm as sustainability advisor.
  • Top Risks 2026 report provides predictive insights on US political revolution and emerging risks.

Benefits

Professional Development Budget

Company News

Aluminium International Today
Sep 19th, 2025
Alcoa releases white paper on economic competitiveness and the green transition

Alcoa and Eurasia Group have published a new white paper titled: "Competitiveness & Green Transition in the Aluminum Industry: Finding Synergies or Facing Trade-Offs."

The Japan Times
Feb 8th, 2024
Why the eyes of the world will be on the U.S. presidential election

Eurasia Group, a U.S.-based research firm, announced on Jan. 8 its top 10 risks in the world for 2024, ranking "the United States vs. itself" as the biggest risk.

PYMNTS
Jan 22nd, 2024
IMF Lays Out 5-Point AI Regulation Action Plan

2023 saw governments around the world grapple with the commercial emergence of artificial intelligence (AI). From the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, to the European Union’s (EU) AI Act, China’s already implemented policies, and Japan’s “Hiroshima Process,” the world’s largest economies took their own distinct approaches to balancing oversight of AI’s implications with support for its further innovation. 2024 is already shaping up to be a year where national, and even supranational, policies are sharpened, signed, and implemented.

But regulation of AI is a complex and evolving topic that involves various considerations — not the least of which is the fact that the technology knows no borders, putting a spotlight on global cooperation and coordination around industry standardization, similar to frameworks that apply to financial regulations, or to cars and healthcare.

In the latest discussion around the regulation of the technology, the International Monetary Fund (IMF) has laid out an action plan for AI governance in a report entitled “Building Blocks for AI Governance.” Authored by AI pioneer Mustafa Suleyman and risk consultant Ian Bremmer, the report outlined five guiding principles “to govern AI effectively,” noting that: “If the Cold War was punctuated by the nuclear arms race, today’s geopolitical contest will likewise reflect a global competition over AI.” After all, AI represents an innovation that can impact nearly every facet of modern life.

That means that AI governance is not just a single, linear problem to be solved, and AI can’t be dealt with on the basis of previous technological oversight, because AI is unlike any previous technology. Already, the IMF has noted in a separate report that up to 60% of jobs in advanced economies will be impacted by AI.

Learning How to Manage and Govern AI

Many western observers believe that an ongoing process of interaction between governments, the private sector, and other relevant organizations is necessary for AI regulation to be effectively implemented. And few believe that effective oversight of AI — meaning a framework that supports innovation while negating AI’s risks — will be possible to achieve with a single piece of legislation.

“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania law professor Cary Coglianese told PYMNTS earlier this month as part of the “TechReg Talks” series. “It’s not one static thing.”

“AI’s unique characteristics, coupled with the geopolitical and economic incentives of the principal actors, call for creativity in governance regimes,” wrote Suleyman and Bremmer for the IMF. Because of the rate and speed at which the capabilities of AI systems are evolving, the present moment is one of increasing urgency for businesses, governments, and both inter- and intra-national institutions to understand and support the benefits of AI while working to mitigate its risks.

“Any idea that regulation is going to be globally ubiquitous is a fool’s errand,” Shaunt Sarkissian, CEO and founder of AI-ID, told PYMNTS in November. He suggested an approach where regulations primarily target use cases, with technology and security measures tailored to specific applications, arguing, for example, that within healthcare the existing regulations, such as HIPAA, provide a strong foundation.

The five guidelines published by the IMF call for effective AI oversight to be:

  • Precautionary, as in weighted toward AI’s potentially catastrophic downsides;
  • Agile, as in capable of responding in turn to AI’s rapid advances;
  • Inclusive, as in collaborative and not dominated by any one actor, public or private;
  • Impermeable, as in providing no avenue for exit from compliance;
  • Targeted, as in modular and adaptable rather than one-size-fits-all.

Elsewhere on the AI regulation front, the House Financial Services Committee last week formed a working group to examine the effect of AI on the financial services and housing industries, while Senate Majority Leader Chuck Schumer has gone on the record saying that action on AI needs to come from Congress, not the White House.

When it comes to adherence with existing laws around AI, Cornell University found that just 18 out of 391 companies in New York City had disclosed the impact of AI on their hiring decisions in accordance with an ordinance passed six months ago.

VentureBeat
Jan 15th, 2024
AI and Policy Leaders Debate Web of Effective Altruism in AI Security | The AI Beat

Last month, I reported on the widening web of connections between the effective altruism (EA) movement and AI security policy circles — from top AI startups like Anthropic to DC think tanks like RAND Corporation. These are linking EA, with its laser focus on preventing what its adherents say are catastrophic risks to humanity from future AGI, to a wide swath of DC think tanks, government agencies and congressional staff. Critics of the EA focus on this existential risk, or ‘x-risk,’ say it is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and traditional cybersecurity. Since then, I’ve been curious about what other AI and policy leaders outside the effective altruism movement — but who are also not aligned with the polar opposite belief system, effective accelerationism (e/acc) — really think about this. Do other LLM companies feel equally concerned about the risk of LLM model weights getting into the wrong hands, for example? Do DC policy makers and watchers fully understand EA influence on AI security efforts? At a moment when Anthropic, well known for its wide range of EA ties, is publishing new research about “sleeper agent” AI models that dupe safety checks meant to catch harmful behavior, and even Congress has expressed concerns about a potential AI research partnership between the National Institute of Standards and Technology (NIST) and RAND, this seems to me to be an important question. In addition, EA made worldwide headlines most recently in connection with the firing of OpenAI CEO Sam Altman, as its non-employee nonprofit board members all had EA connections.

What I discovered in my latest interviews is an interesting mix of deep concern about EA’s billionaire-funded ideological bent and its growing reach and influence over the AI security debate in Washington DC, as well as an acknowledgement by some that AI risks that go beyond the short term are an important part of the DC policy discussion. The EA movement, which began as an effort to ‘do good better,’ is now heavily funded by tech billionaires who consider preventing an AI-related catastrophe their number one priority, particularly through funding AI security (which is also described as AI ‘safety’) efforts — especially in the biosecurity space. In my December piece, I detailed the concerns of Anthropic CISO Jason Clinton and two researchers from RAND Corporation about the security of LLM model weights in the face of threats from opportunistic criminals, terrorist groups or highly resourced nation-state operations. Clinton told me that securing the model weights for Claude, Anthropic’s LLM, is his number one priority. The threat of opportunistic criminals, terrorist groups or highly resourced nation-state operations accessing the weights of the most sophisticated and powerful LLMs is alarming, he explained, because “if an attacker got access to the entire file, that’s the entire neural network.”

RAND researcher Sella Nevo told me that within two years it was plausible AI models will have significant national security importance, such as the possibility that malicious actors could misuse them for biological weapon development. All three, I discovered, have close ties to the EA community, and the two companies are also interconnected thanks to EA — for example, Jason Matheny, RAND’s CEO, is also a member of Anthropic’s Long-Term Benefit Trust and has longtime ties to the EA movement.

My coverage was prompted by Brendan Bordelon’s ongoing Politico reporting on this issue, including a recent article which quoted an anonymous biosecurity researcher in Washington calling EA-linked funders “an epic infiltration” in policy circles. As Washington grapples with the rise of AI, Bordelon wrote, “a small army of adherents to ‘effective altruism’ has descended on the nation’s capital and is dominating how the White House, Congress and think tanks approach the technology.”

Cohere pushes back on EA fears about LLM model weights

First, I turned to Nick Frosst, co-founder of Cohere, an OpenAI and Anthropic competitor which focuses on developing LLMs for the enterprise, for his take on these issues. He told me in a recent interview that he does not think large language models pose an existential threat, and that while Cohere protects its model weights, the company’s concern is the business risk associated with others getting access to the weights, not an existential one. “I do want to make the distinction… I’m talking about large language models,” he said.

Yonhap News Agency
Jan 8th, 2024
Eurasia Group calls N. Korea, Russia, Iran 'axis of rogues' in 2024 'top risk' forecast

Eurasia Group released the "Top Risks 2024" report that listed as the "biggest challenge" for the year "the United States versus itself" in the November presidential election that it said will be "by far the most consequential for the world's security, stability, and economic outlook."

INACTIVE