AI Research

Open call to apply for AlgorithmWatch’s reporting fellowship on AI and power

The expansion of generative AI infrastructure, the influence of big tech executives in public administrations, and their meddling in regulation have become pressing topics, making algorithmic and AI accountability reporting more relevant than ever. As part of its ongoing reporting fellowship, AlgorithmWatch is calling on journalists and researchers across Europe to join a new cohort of reporters to unveil automated injustice and investigate the relationship between AI and power.

The new round of the fellowship will run from November 2025 to May 2026. During this period, AlgorithmWatch will provide financial support, mentoring sessions, and organizational resources. We expect our fellows to produce at least one journalistic story, audio or video feature, research report, or a similar output by the end of the fellowship. AlgorithmWatch will also support the fellows in the publication process.

Following the line of investigation of the previous cohort, we will focus on the power structure and the influence dynamics underpinning the use and development of AI. This includes a wide range of possible investigation topics around the impact of AI in society, including algorithmic discrimination. Possible research topics could be:

  • Influence of big tech corporations on politics and the drafting of legislation, e.g., identifying pressure groups that manage to slow down the development or limit the scope of specific regulations, such as the watering down of the EU AI Act.
  • Structural and systemic oppression of women, racialized people, or other vulnerable groups, exacerbated by the use of AI, e.g., automated software in public administration that systematically targets non-nationals in fraud detection.
  • Powerful industry leaders, such as big tech companies, facilitating the surveillance of groups, e.g., analyzing major contracts between regional or national governments and private firms that allow the mass scanning of citizens’ personal data with automated tools, such as computer vision algorithms or remote biometric identification (RBI).
  • Discrimination by automated systems in the fields of: health, education, finance, work, human resources, etc., e.g., public authorities using AI-powered apps to review welfare benefits.
  • Exploitation of data workers and algorithmic management platforms to systematically replace traditional labor models, e.g. hospitals automatically allocating care services to save employment costs; the erosion of creative freelance work caused by AI tools in the fields of translation, illustration, graphic design, marketing, etc.
  • The political impact of European decision-makers and lobbyists who promote longtermism and effective altruism ideologies.

Candidates are allowed to pitch a joint proposal with a research partner. In such a case, the money will be equally divided. We will also evaluate potential connections between proposals and may suggest shortlisted candidates to work together. In previous rounds, cross-border collaboration and joint research projects have proven to bring successful results, such as investigations on the expansion of data center infrastructure or algorithmic discrimination in the financial industry. The application deadline is 15 September 2025 23:59 CET.

We value stories that provide empirical cases of AI’s social impact to demonstrate how automated systems are already affecting people at an individual or collective level. We are also looking for data-driven stories on the impact of AI development, e.g., analysis of a given infrastructure’s energy consumption or automated systems’ performance.

We will host two Q&A sessions on Zoom to answer questions on 20 August at 11:00 CET and 8 September at 18:00 CET. Please find the link to each meeting attached to the respective date.

What to expect

The fellowship will start on 10 November 2025 and end on 10 May 2026.

We will choose a maximum of six applicants, each of whom will receive a total of €7,400 (gross) to conduct their research. Candidates can present a joint proposal with a research partner; in this case, the money will be divided equally among the participants.

Fellows will be free to choose the media outlet for the publication and also decide whether they wish to sell their stories. They can otherwise be published on AlgorithmWatch’s platforms. The fellowship includes outreach support. Optionally, the fellowship will also provide mentorship sessions with AlgorithmWatch team members and external researchers in the algorithmic accountability field.

Fellows will also be invited to an in-person gathering in Berlin when the fellowship starts.

Who can apply

Any person above 18 is welcome to apply. We strongly encourage people from minoritized or marginalized groups and communities to apply.

Applicants do not need to have a background in computer science. Just like you do not need a degree in climate science to report on the climate crisis, the effects of automated systems can be researched by non-technical people. We do expect people who apply to be familiar with the algorithmic field and have experience with writing and working with journalists.

There are some specific requirements the applicants must fulfill:

  • Residence in a country of the European Union, in an EFTA country (Iceland, Liechtenstein, Norway, and Switzerland), in a candidate country, or in a former EU member state.
  • Written English at a B2 level in the Common European Framework of Reference for Languages.
  • A very strong interest in the topic of AI accountability and automated decision-making.
  • A commitment to complete the research within the timeline of the fellowship and to deliver at least one journalistic product, such as an article, an audio or video feature, a report or similar.

How to apply

Please take into account the following guidelines before completing your proposal:

APPROACH TO THE RESEARCH

We are looking for journalistic research and stories that follow a narrative, as opposed to theoretical hypotheses or academic research. Practical, real-life cases will be valued positively.

Here we are asking you to provide an overview of the story you’d like to research, your research plan and goals. The central topic of the fellowship is the relation between AI and power structures, as well as the value chain it is built on. We are looking for research projects that:

  • Focus on the influence and power structure of AI.
  • Take place in Europe.
  • Take into account the impact of AI on society.
  • Bring new information to light, or that provide the point of view of people who are rarely given a voice in the debates on AI.

What we are not looking for:

  • Information on commercialisation and manufacturing of tech products, such as hardware (e.g. “Nvidia releases new version of microchips”).
  • Major tech announcements without research into their impact on society (e.g. “Meta plans to open a data center in X”).
  • Theoretical and/or academic research (e.g. “What is the political economy of AI”).

Please read the FAQ section below. If you have further doubts about whether your proposal fits the scope of the fellowship, please email us at bellio@algorithmwatch.org.

FAQs

Are you offering an employment contract?

No. The allowance is paid against invoices. If fellows are unable to invoice, we will work with them to find a solution.

Who will own the copyright to the reporting I do?

You will have to publish the work under a CC-BY license.

Will I work together with AlgorithmWatch?

Yes! AlgorithmWatch will coordinate the work of the fellows, and fellows will be invited to connect with other members of the organization.

Will I work together with other fellows?

Yes, it is an option and we strongly encourage applicants to propose joint projects. We will also hold at least one monthly meeting with all the fellows.

Will there be in-person meetings?

Yes. Fellows will meet in person at least once, most likely at the beginning of the fellowship period.

Do you provide office space for fellows?

No.

Can I participate in the fellowship for less than 6 months?

We expect fellows to complete the full 6 months of the program, but we can offer some flexibility.

Do I have to publish in English?

No. You can publish in your own language, but communication within the fellowship is in English.

Is there an age limit?

Anyone above 18 is welcome to apply.

Can I apply although I’m not a journalist?

Yes.

Can I apply if I’m a student?

Yes.

Can I apply if I’m working as staff in a newsroom?

Yes, but make sure that the fellowship is compatible with your work and your media’s agreements.

Can I apply if I do not have a work permit (e.g. asylum-seeker)?

Yes, but you should check that you are allowed to take part in such a program.

Which countries are EU members, former members, candidates, or EFTA countries?

Albania, Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Moldova, Montenegro, Netherlands, North Macedonia, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye, Ukraine, United Kingdom.

Can I apply if I live outside of these countries?

No.

Can I apply if I’m a national of these countries but don’t live there at the moment?

No.

Can I work on stories besides the main investigation during the fellowship?

We are open to working with the fellows on other stories for additional remuneration. These would be discussed on a case-by-case basis. The fellowship does not exclude participation in other reporting grants or programmes.

Will AlgorithmWatch reimburse the travel expenses I incur?

This can also be discussed on a case-by-case basis if needed.

AlgorithmWatch is an advocacy organization. Will I have to do advocacy?

No. Reporting and advocacy are separate activities.

The Algorithmic Accountability Reporting fellowship is supported by:




Artificial intelligence provides opportunities for stronger doctor-patient relationships, preventative care – The Daily Item



Mississippi’s AI Ambitions Take Shape With NVIDIA Partnership

Mississippi is carving a new path in its artificial intelligence landscape by partnering with technology company NVIDIA to expand AI education, research and workforce development across the state.

Similar to other NVIDIA partnerships in California, Utah and Oregon, the agreement — formalized through a memorandum of understanding — brings hands-on AI training to Mississippi’s classrooms, colleges and workforce programs statewide. Mississippi residents will now have access to the company’s cloud-based tools and platforms, expanding their available resources and learning capabilities. Educators certified through NVIDIA’s Deep Learning Institute will provide training, offered through state colleges, universities and community-based sites.

The partnership, announced June 18, connects NVIDIA, the Mississippi Artificial Intelligence Network (MAIN), the Mississippi Development Authority, AccelerateMS, and the Mississippi governor’s office.


In a statement, Gov. Tate Reeves said working with NVIDIA is a major step forward for Mississippi’s future and emphasized the collaboration’s potential to reshape the state’s economy.

“By expanding AI education, investing in workforce development and encouraging innovation, we, along with NVIDIA, are creating a pathway to dynamic careers in AI and cybersecurity for Mississippians,” Reeves said. “These are the in-demand jobs of the future — jobs that will change the landscape of our economy for generations to come.”

That future-focused vision resonated with NVIDIA, which had already established ties in Mississippi through earlier work with Mississippi State University. Once company officials learned about the coordinated AI work happening across the state, MAIN Director Kollin Napier said, they saw a chance to play a larger role.

“The opportunity to partner with NVIDIA served to amplify our mission,” Napier said. “Mississippi stood out because of the strong foundation built through MAIN, which had already reached thousands of learners through free, accessible AI training programs and established a model of coordination across education, workforce and industry.”

The partnership aims to strengthen classroom instruction, but it goes beyond traditional education. The state plans to roll out certifications, hands-on workshops and job-aligned curricula to help learners gain practical skills in AI, machine learning and data science. This includes training opportunities not only at four-year universities, but at technical colleges and workforce hubs.

A major priority of the initiative is expanding access to these opportunities. The trainings are designed to reach rural communities and underserved populations, helping students and educators build career pathways for those traditionally excluded from high-tech industries.

At the same time, the state will be a testing ground for AI-powered innovation. Joint research projects between MAIN and other industry partners will explore real-world AI applications across key sectors including agriculture, health care, energy and national security.

In order for these innovations to have a tangible economic impact, Mississippi is also working with NVIDIA to help local businesses adopt AI tools, introducing machine learning and data technologies into their operations to boost productivity and competitiveness, according to Napier.

“Over the next year, these efforts will scale into precision agriculture training at community colleges and live demonstrations for producers statewide,” he said, citing an example in the farming sector: AI, he said, is helping optimize yields, monitor soil and water conditions, and detect crop disease earlier — benefits that could significantly improve outcomes in Mississippi’s rural areas.

For NVIDIA, these kinds of real-world applications underscore why Mississippi stood out as a strong partner. The state’s commitment to blending education, workforce development and industry engagement directly supported the company’s broader goals.

“Together, we will enhance economic growth through an AI-skilled workforce, advanced research and industry engagement, positioning Mississippi as a hub for AI-driven transformation to the benefit of its communities,” Louis Stewart, head of strategic initiatives for NVIDIA’s global developer ecosystem, said.

That plan aligns with Mississippi’s broader strategy to not only prepare talent but to attract high-tech investment.

“The state is building a workforce that is skilled, certified and ready to support innovation, which positions Mississippi as a serious contender for attracting tech companies and AI-driven industries,” Napier said. “This initiative is not just about preparing people for jobs; it is about bringing the jobs to Mississippi.”

The state, he said, is not merely focused on building skills — but also on building trust. Napier said leaders are weaving ethics, cybersecurity and privacy into every part of their AI ecosystem. MAIN’s programs teach not only the technical side of AI, but how to design systems responsibly, reduce bias and understand the broader impacts of emerging technologies.

To track how things are going, the state is keeping an eye on workforce milestones like certifications earned, job placements and participation in training. But real success will be measured not just by those numbers, Napier said, but by the doors that open for people as a result.

With partnerships across government, education and industry, he said he hopes Mississippi will become a model for how other states can approach AI, not just with bold goals but with purpose, coordination and real impact.

Through MAIN, and by working with NVIDIA, Napier said, the state is “modeling how to bring AI opportunities to every corner of a population, including rural and underserved communities,” and demonstrating “what it looks like to lead with purpose, align across sectors, and build an AI ecosystem that is ethical, inclusive and built for long-term impact.”






Supercharge your AI workflows by connecting to SageMaker Studio from Visual Studio Code

AI developers and machine learning (ML) engineers can now use the capabilities of Amazon SageMaker Studio directly from their local Visual Studio Code (VS Code). With this capability, you can use your customized local VS Code setup, including AI-assisted development tools, custom extensions, and debugging tools while accessing compute resources and your data in SageMaker Studio. By accessing familiar model development features, data scientists can maintain their established workflows, preserve their productivity tools, and seamlessly develop, train, and deploy machine learning, deep learning and generative AI models.

In this post, we show you how to remotely connect your local VS Code to SageMaker Studio development environments to use your customized development environment while accessing Amazon SageMaker AI compute resources.

The local integrated development environment (IDE) connection capability delivers three key benefits for developers and data scientists:

  • Familiar development environment with scalable compute: Work in your familiar IDE environment while harnessing the purpose-built model development environment of SageMaker AI. Keep your preferred themes, shortcuts, extensions, productivity, and AI tools while accessing SageMaker AI features.
  • Simplified operations: With a few clicks, you can minimize the complex configuration and administrative overhead of setting up remote access to SageMaker Studio spaces. The integration provides direct access to Studio spaces from your IDE.
  • Enterprise-grade security: Benefit from secure connections between your IDE and SageMaker AI through automatic credential management and session maintenance. In addition, code execution remains within the controlled boundaries of SageMaker AI.

This feature bridges the gap between local development preferences and cloud-based machine learning resources, so that teams can improve their productivity while using the features of Amazon SageMaker AI.

Solution overview

The following diagram showcases the interaction between your local IDE and SageMaker Studio spaces.

The solution architecture consists of three main components:

  • Local computer: Your development machine running VS Code with AWS Toolkit extension installed.
  • SageMaker Studio: A unified, web-based ML development environment to seamlessly build, train, deploy, and manage machine learning and analytics workflows at scale using integrated AWS tools and secure, governed access to your data.
  • AWS Systems Manager: A secure, scalable remote access and management service that enables seamless connectivity between your local VS Code and SageMaker Studio spaces to streamline ML development workflows.

The connection flow supports two options:

  • Direct launch (deep link): Users can initiate the connection directly from the SageMaker Studio web interface by choosing Open in VS Code, which automatically launches their local VS Code instance.
  • AWS Toolkit connection: Users can connect through AWS Toolkit extension in VS Code by browsing available SageMaker Studio spaces and selecting their target environment.

In addition to the preceding options, users can also connect to their space directly from their IDE terminal using SSH. For instructions, refer to the SageMaker documentation on connecting through SSH.

After connecting, developers can:

  • Use their custom VS Code extensions and tools
  • Remotely access and use their space’s storage
  • Run their AI and ML workloads in SageMaker compute environments
  • Work with notebooks in their preferred IDE
  • Maintain the same security parameters as the SageMaker Studio web environment

Solution implementation

Prerequisites

To try the remote IDE connection, you must meet the following prerequisites:

  1. You have access to a SageMaker Studio domain with connectivity to the internet. For domains set up in VPC-only mode, your domain should have a route out to the internet through a proxy, or a NAT gateway. If your domain is completely isolated from the internet, see Connect to VPC with subnets without internet access for setting up the remote connection. If you do not have a Studio domain, you can create one using the quick setup or custom setup option.
  2. You have permissions to update the SageMaker Studio domain or user execution role in AWS Identity and Access Management (IAM).
  3. You have the latest stable VS Code with Microsoft Remote SSH (version 0.74.0 or later), and AWS Toolkit extension (version v3.68.0 or later) installed on your local machine. Optionally, if you want to connect to SageMaker spaces directly from VS Code, you should be authenticated to access AWS resources using IAM or AWS IAM Identity Center credentials. See the administrator documentation for AWS Toolkit authentication support.
  4. You use compatible SageMaker Distribution images (version 2.7 or later, or 3.1 or later) for running SageMaker Studio spaces, or a custom image.
  5. If you’re initiating the connection from the IDE, you already have a user profile in the SageMaker Studio domain you want to connect to, and the spaces are already created using the Studio UI or through APIs. The AWS Toolkit does not allow creation or deletion of spaces.

Set up necessary permissions

We’ve launched the StartSession API for remote IDE connectivity. Add the sagemaker:StartSession permission to your user’s role so that they can remotely connect to a space.

For the deep-linking experience, the user starts the remote session from the Studio UI. Hence, the domain default execution role, or the user’s execution role should allow the user to call the StartSession API. Modify the permissions on your domain or user execution role by adding the following policy statement:

{
    "Version": "2012-10-17", 
    "Statement": [
        {
            "Sid": "RestrictStartSessionOnSpacesToUserProfile",
            "Effect": "Allow",
            "Action": [
                "sagemaker:StartSession"
            ],
            "Resource": "arn:*:sagemaker:${aws:Region}:${aws:AccountId}:space/${sagemaker:DomainId}/*",
            "Condition": {
                "ArnLike": {
                    "sagemaker:ResourceTag/sagemaker:user-profile-arn": "arn:*:sagemaker:${aws:Region}:${aws:AccountId}:user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
                }
            }
        }
    ]
}
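If you manage these permissions programmatically, the same statement can be attached to the execution role as an inline policy. The following is a minimal sketch, assuming boto3; the role and policy names are placeholders, and the policy document is copied from above.

```python
import json

# The scoped StartSession policy from above, built as a Python dict so it
# can be attached to a domain or user execution role programmatically.
START_SESSION_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictStartSessionOnSpacesToUserProfile",
            "Effect": "Allow",
            "Action": ["sagemaker:StartSession"],
            "Resource": "arn:*:sagemaker:${aws:Region}:${aws:AccountId}:space/${sagemaker:DomainId}/*",
            "Condition": {
                "ArnLike": {
                    "sagemaker:ResourceTag/sagemaker:user-profile-arn": (
                        "arn:*:sagemaker:${aws:Region}:${aws:AccountId}"
                        ":user-profile/${sagemaker:DomainId}/${sagemaker:UserProfileName}"
                    )
                }
            },
        }
    ],
}


def attach_start_session_policy(role_name: str, policy_name: str = "AllowStudioStartSession") -> None:
    """Attach the policy as an inline policy on the given IAM role.

    Requires AWS credentials with iam:PutRolePolicy; role_name is a
    placeholder for your actual domain or user execution role.
    """
    import boto3  # imported here: only needed when actually calling AWS

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName=policy_name,
        PolicyDocument=json.dumps(START_SESSION_POLICY),
    )
```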

If you’re initiating the connection to SageMaker Studio spaces directly from VS Code, your AWS credentials should allow the user to list spaces, start or stop a space, and initiate a connection to a running space. Make sure that your AWS credentials allow the following API actions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:ListSpaces",
                "sagemaker:DescribeSpace",
                "sagemaker:UpdateSpace",
                "sagemaker:ListApps",
                "sagemaker:CreateApp",
                "sagemaker:DeleteApp",
                "sagemaker:DescribeApp",
                "sagemaker:StartSession",
                "sagemaker:DescribeDomain",
                "sagemaker:AddTags"
            ],
            "Resource": "*"
        }
    ]
}

This initial IAM policy provides a quick-start foundation for testing SageMaker features. Organizations can implement more granular access controls using resource Amazon Resource Name (ARN) constraints or attribute-based access control (ABAC). With the introduction of the StartSession API, you can restrict access by defining space ARNs in the resource section or implementing condition tags according to your specific security needs, as shown in the following example.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowRemoteAccessByTag",
            "Effect": "Allow",
            "Action": [
                "sagemaker:StartSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/User": "${aws:PrincipalTag/User}"
                }
            }
        }
    ]
}
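A tag-based condition like the one above only works if the spaces actually carry the expected tag. The following is a sketch of tagging a space with the AddTags API, assuming boto3; the User tag key mirrors the example condition, and the ARN format follows the space ARNs used in the earlier policy.

```python
def space_arn(region: str, account_id: str, domain_id: str, space_name: str) -> str:
    """Build the ARN for a Studio space (space/<domain-id>/<space-name>,
    matching the resource format in the StartSession policy above)."""
    return f"arn:aws:sagemaker:{region}:{account_id}:space/{domain_id}/{space_name}"


def tag_space_for_user(region: str, account_id: str, domain_id: str, space_name: str, user: str) -> None:
    """Tag a space so the ABAC condition above can match it.

    Requires AWS credentials with sagemaker:AddTags.
    """
    import boto3  # only needed when actually calling AWS

    sm = boto3.client("sagemaker", region_name=region)
    sm.add_tags(
        ResourceArn=space_arn(region, account_id, domain_id, space_name),
        Tags=[{"Key": "User", "Value": user}],
    )
```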

Enable remote connectivity and launch VS Code from SageMaker Studio

To connect to a SageMaker space remotely, the space must have remote access enabled.

  1. Before running a space on the Studio UI, you can toggle Remote access on to enable the feature, as shown in the following screenshot.

  2. After the feature is enabled, choose Run space to start the space. After the space is running, choose Open in VS Code to launch VS Code.

  3. The first time you choose this option, you’ll be prompted by your browser to confirm opening VS Code. Select the checkbox Always allow studio to confirm and then choose Open Visual Studio Code.

  4. VS Code will open, and you will be prompted to update your SSH configuration. Choose Update SSH config to complete the connection. This is also a one-time setup; you will not be prompted for future connections.

  5. On successful connection, a new window launches that is connected to the SageMaker Studio space and has access to the space’s storage.

Connect to the space from VS Code

Using the AWS Toolkit, you can list spaces, start a space and connect to it, or connect to a running space that has remote access enabled. If a running space doesn’t have remote connectivity enabled, you can stop the space from the AWS Toolkit and then select the Connect icon to automatically turn on remote connectivity and restart the space. The following section describes the experience in detail.

  1. After you’re authenticated into AWS, open the AWS Toolkit and go to the AWS Region where your SageMaker Studio domain is located. You will now see a SageMaker AI section; choose it to list the spaces in your Region. If you’re connected using IAM, the toolkit lists the spaces across domains and users in your Region. See the optional Filter spaces to a specific domain or user section below for instructions on viewing spaces for a particular user profile. For IAM Identity Center users, the list is already filtered to display only the spaces that you own.

  2. After you identify the space, choose the connectivity icon shown in the screenshot below to connect to the space.

Optional: Filter spaces to a specific domain or user

When connecting to an account using IAM, you will see a list of spaces in the account and Region. This can be overwhelming if the account has tens or hundreds of domains, users, and spaces. The toolkit provides a filter utility that helps you quickly narrow the list of spaces to a specific user profile or a set of user profiles.

  1. Next to SageMaker AI, choose the filter icon as shown in the following screenshot.

  2. You will now see a list of user profiles and domains. Scroll through the list or enter a user profile or domain name, then select or deselect entries to filter the list of spaces by domain or user profile.

Use cases

The following use cases demonstrate how AI developers and ML engineers can use the local IDE connection capability.

Connecting to a notebook kernel

After you’re connected to the space, you can start creating and running notebooks and scripts right from your local development environment. By using this method, you can use the managed infrastructure provided by SageMaker for resource-intensive AI tasks while coding in a familiar environment. You can run notebook cells on your SageMaker Distribution or custom image kernels, and can choose the IDE that maximizes your productivity. Use the following steps to create and connect your notebook to a remote kernel:

  1. On your VS Code file explorer, choose the plus (+) icon to create a new file, name it remote-kernel.ipynb.
  2. Open the notebook and run a cell (for example, print("Hello from remote IDE")). VS Code will show a pop-up for installing the Python and Jupyter extensions.
  3. Choose Install/Enable suggested extensions.
  4. After the extensions are installed, VS Code will automatically launch the kernel selector. You can also choose Select Kernel on the right to view the list of kernels.

For the next steps, follow the directions for the space you’re connected to.

Code Editor spaces:

  1. Select Python environments… and choose from a list of provided Python environments. After you are connected, you can start running the cells in your notebook.

JupyterLab spaces:

  1. Select the Existing Jupyter Server… option to have the same kernel experience as the JupyterLab environment.
    If this is the first time you are connecting to a JupyterLab space, you will need to configure the Jupyter server to view the same kernels as the remote server using the following steps.
    1. Choose Enter the URL of the running Jupyter Server, enter http://localhost:8888/jupyterlab/default/lab as the URL, and press Enter.
    2. Enter a custom server display name, for example, JupyterLab Space Default Server, and press Enter. You will now be able to view the list of kernels available on the remote Jupyter server. For subsequent connections, this display name will be available to choose from when you select the existing Jupyter server option.

The following graphic shows the entire workflow. In this example, we’re running a JupyterLab space with the SageMaker Distribution image, so we can view the list of kernels available in the image.

You can choose the kernel of your choice, for example, the Python 3 kernel, and you can start running the notebook cells on the remote kernel. With access to the SageMaker managed kernels, you can now focus on model development rather than infrastructure and runtime management, while using the development environment you know and trust.

Best practices and guardrails

  1. Follow the principle of least privilege when allowing users to connect remotely to SageMaker Studio space applications. SageMaker Studio supports custom tag propagation; we recommend tagging each user with a unique identifier and using that tag to restrict the StartSession API to their private applications.
  2. As an administrator, if you want to disable this feature for your users, you can enforce it using the sagemaker:RemoteAccess condition key. The following is an example policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateSpaceWithRemoteAccessDisabled",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateSpace",
                "sagemaker:UpdateSpace"
                ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "sagemaker:RemoteAccess": [
                        "DISABLED"
                    ]
                }
            }
        },
        {
            "Sid": "AllowCreateSpaceWithNoRemoteAccess",
            "Effect": "Allow",
            "Action":  [
                "sagemaker:CreateSpace",
                "sagemaker:UpdateSpace"
                ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "sagemaker:RemoteAccess": "true"
                }
            }
        }
    ]
}

  3. When connecting remotely to SageMaker Studio spaces from your local IDE, be aware of bandwidth constraints. For optimal performance, avoid using the remote connection to transfer or access large datasets; instead, use data transfer methods built for the cloud and in-place data processing to facilitate a smooth user experience. We recommend an instance with at least 8 GB of storage to start with; the SageMaker Studio UI will throw an exception if you choose a smaller instance.
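The administrator guardrail in step 2 can also be applied to existing spaces programmatically. The following is a minimal sketch using boto3, assuming the SpaceSettings.RemoteAccess field of the UpdateSpace API; note that a space typically must be stopped before it can be updated.

```python
def disable_remote_access_params(domain_id: str, space_name: str) -> dict:
    """Request parameters for UpdateSpace that turn remote access off.

    The SpaceSettings.RemoteAccess value ('ENABLED'/'DISABLED') mirrors the
    sagemaker:RemoteAccess condition key used in the guardrail policy above.
    """
    return {
        "DomainId": domain_id,
        "SpaceName": space_name,
        "SpaceSettings": {"RemoteAccess": "DISABLED"},
    }


def disable_remote_access(domain_id: str, space_name: str) -> None:
    """Apply the update. Requires AWS credentials; the space should be stopped first."""
    import boto3  # only needed when actually calling AWS

    sm = boto3.client("sagemaker")
    sm.update_space(**disable_remote_access_params(domain_id, space_name))
```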

Cleanup

If you have created a SageMaker Studio domain for the purposes of this post, remember to delete the applications, spaces, user profiles, and the domain. For instructions, see Delete a domain.

For the SageMaker Studio spaces, use the idle shutdown functionality to avoid incurring charges for compute when it is not in use.

Conclusion

The remote IDE connection feature for Amazon SageMaker Studio bridges the gap between local development environments and powerful ML infrastructure of SageMaker AI. With direct connections from local IDEs to SageMaker Studio spaces, developers and data scientists can now:

  • Maintain their preferred development environment while using the compute resources of SageMaker AI
  • Use custom extensions, debugging tools, and familiar workflows
  • Access governed data and ML resources within existing security boundaries
  • Choose between convenient deep linking or AWS Toolkit connection methods
  • Operate within enterprise-grade security controls and permissions

This integration minimizes the productivity barriers of context switching while facilitating secure access to SageMaker AI resources. Get started today by connecting your local development environment to SageMaker Studio and experience streamlined ML development workflows using your familiar tools, backed by the powerful ML infrastructure of SageMaker AI.


About the authors


Durga Sury
 is a Senior Solutions Architect at Amazon SageMaker, where she helps enterprise customers build secure and scalable AI/ML systems. When she’s not architecting solutions, you can find her enjoying sunny walks with her dog, immersing herself in murder mystery books, or catching up on her favorite Netflix shows.

Edward Sun is a Senior SDE working on SageMaker Studio at Amazon Web Services. He is focused on building interactive ML solutions and simplifying the customer experience to integrate SageMaker Studio with popular technologies in the data engineering and ML landscape. In his spare time, Edward is a big fan of camping, hiking, and fishing, and enjoys spending time with his family.

Raj Bagwe is a Senior Solutions Architect at Amazon Web Services, based in San Francisco, California. With over 6 years at AWS, he helps customers navigate complex technological challenges and specializes in Cloud Architecture, Security and Migrations. In his spare time, he coaches a robotics team and plays volleyball. He can be reached at X handle @rajesh_bagwe.

Sri Aakash Mandavilli is a Software Engineer on the Amazon SageMaker Studio team, where he has been building innovative products since 2021. He specializes in developing various solutions across the Studio service to enhance the machine learning development experience. Outside of work, SriAakash enjoys staying active through hiking, biking, and taking long walks.


