Tools & Platforms

Page not found – Lancaster City Council

We show this page when a page cannot be found on our website. It might be that a link has changed or that the page is no longer available.

The page or document you tried to view could not be located. Our new website has changed some of its links, so if you’ve bookmarked a page you may need to find it again.

We apologise for the inconvenience.




Tools & Platforms

Proposed SANDBOX Act May Remove AI Oversight for Developers

Proposed federal legislation known as the SANDBOX Act, introduced on Wednesday, would grant AI developers regulatory lenience to launch new technologies — but some experts argue that the bill poses risks to consumers’ privacy.

Governments from Massachusetts to Delaware and beyond are increasingly exploring the sandbox model as a way to let companies experiment with AI in a controlled environment. In Utah, regulatory mitigation agreements with businesses allow laws to be temporarily relaxed while new technologies are developed, with data sharing, safety and compliance measures in place.

The SANDBOX Act proposed this week by Sen. Ted Cruz — a.k.a. the Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and eXperimentation Act — aims to do this at the federal level, establishing an AI regulatory sandbox program through the U.S. Office of Science and Technology Policy (OSTP).


Under the bill, AI developers and deployers could apply to have specific regulations modified or waived so they can bring new AI technologies to market more quickly. In effect, the bill would make select companies eligible for two years of regulatory exemptions. OSTP would work across federal agencies to evaluate such requests, and the U.S. Congress would receive regular reports on how often rules were modified or waived to inform policymaking. The legislation aims to help position the U.S. as a leader in AI, which is a federal priority.

“[The SANDBOX Act] embraces our nation’s entrepreneurial spirit and gives AI developers the room to create while still mitigating any health or consumer risks,” Cruz said in a statement.

Stakeholders in responsible AI advancement, however, have raised concerns about the proposed legislation.

Public Citizen, a nonprofit consumer rights advocacy group, said that it “puts public safety on the chopping block in favor of corporate immunity.” The group released a statement from its accountability advocate J.B. Branch about the bill.

“Public safety should never be made optional, but that’s exactly what the SANDBOX Act does,” Branch said. “It guts basic consumer protections, lets companies skirt accountability, and treats Americans as test subjects.”

While proponents of easing these rules argue that they are holding AI companies back, Branch said this is “simply not true,” citing assessments of the companies’ value.

The CEO of the Alliance for Secure AI, Brendan Steinhauser, argued in a statement that Big Tech companies have repeatedly failed to make safety and harm prevention top priorities.

“The SANDBOX Act removes much-needed oversight as Big Tech refuses to remain transparent with the public about the risks of advanced AI,” he said, questioning who will be allowed to enter this sandbox environment and why.

Other groups, like the Information Technology Industry Council and the Abundance Institute, support this legislation.

This bill comes on the heels of much division about the future of AI regulation — and who holds the authority to implement safeguards.

There is bipartisan agreement among the public that both states and the federal government should be able to regulate AI. But the federal government has attempted to limit states’ regulatory authority, first through a proposed moratorium in a recent budget bill, which Congress ultimately rejected, and more recently through the AI Action Plan, which could threaten states’ access to federal funding over their regulatory policies.

There is also bipartisan agreement on enacting some basic AI regulatory protections, such as a ban on lethal autonomous weapons and requiring AI programs to pass a government test before use.

“No federal legislation establishing broad regulatory authorities for the development or use of AI or prohibitions on AI has been enacted,” according to a June Congressional Research Service report.






Tools & Platforms

GITEX GLOBAL Brings Global AI Leaders to Egypt Ahead of Ai Everything MEA 2026

Mohammedia – Egypt has taken a major step in its AI journey with an exclusive launch event for Ai Everything Middle East & Africa Egypt at the historic Sultan Hussein Kamel Palace in Cairo.

The event, organised by GITEX GLOBAL and hosted by Egypt’s Ministry of Communications and Information Technology (MCIT) in partnership with the Information Technology Industry Development Agency (ITIDA), brought together senior government officials, global tech executives, AI innovators, media, and startup representatives. 

The launch sets the stage for the main event, Ai Everything MEA Egypt 2026, scheduled for 11-12 February 2026.

The event showcased Egypt’s goal of generating $42.7 billion in annual AI value by 2030 and establishing Cairo as a hub for global AI collaboration. Discussions focused on how Ai Everything MEA Egypt brings international expertise together with Egypt’s National AI Strategy 2025-2030.

Many of Egypt’s strengths in the tech industry were highlighted as key advantages for growing its AI ecosystem: outsourced digital services, semiconductors, electronic design, public sector transformation, startup innovation, and attracting global investments.

Eng. Ahmed Elzaher, CEO of ITIDA, opened the event by emphasizing that “AI today is no longer a trend; it is a core driver of economic and societal transformation. Hosting Ai Everything MEA Egypt is part of Egypt’s mission to remain at the forefront of the global technology revolution. This summit cements our position as a regional hub for innovation and trusted global partner in the AI era.”

Trixie LohMirmand, EVP of Dubai World Trade Centre and CEO of KAOUN International, added, “AI will be the backbone of Egypt’s economic transformation.”

“Our goal with Ai Everything MEA is to empower both the public and private sectors, as well as young talent and startups, to shape the country’s AI future,” she continued.

Egypt’s Minister of Communications and Information Technology, H.E. Dr. Amr Talaat, noted that the country’s selection to host Ai Everything MEA Egypt reflects international recognition of Egypt’s progress in artificial intelligence. 

“Since launching our first National AI Strategy in 2019, Egypt has advanced 46 places in the global AI Readiness Index,” he said.

“The updated strategy focuses on six pillars, including wider access to computing resources, stronger data governance, AI systems to boost growth, digital skills, public awareness, and a solid regulatory framework,” he added.


The launch also featured a panel discussion on “Egypt’s AI Future,” with leaders from IBM, HPE, Deloitte Innovation Hub, WideBot AI, Intella, and Plug & Play Tech Centre. Speakers shared insights on scaling startups, improving public-private partnerships, and raising Egypt’s global competitiveness in AI. 

Marwa Abbas from IBM highlighted how AI tools like IBM watsonx are helping Egyptian businesses accelerate digital transformation, while HPE’s Mohamed Wasfy noted that Egypt now hosts some of the world’s most energy-efficient AI systems.

CEOs of Egyptian AI startups, including WideBot AI and Intella, discussed recent funding successes and strategies to grow their businesses internationally.

Ai Everything MEA Egypt 2026, taking place at the NCIEC in Cairo, will host AI experts, startups, investors, policymakers, and global enterprises from 60 countries. 

The event features discussions on next-generation AI infrastructure, responsible scaling, semiconductors, cybersecurity, digital health, fintech, and startup-investor networking. It aims to attract global investment and reinforce Cairo as the Middle East and Africa’s AI innovation center.




Tools & Platforms

Making Generative AI Work for Everyone in a Factory Setting

In our last discussion, we framed Industrial AI as a comprehensive toolbox filled with specialized instruments. We argued that generative AI, for all its power, is the newest tool in this box, not a replacement for the entire workshop. Now, it’s time to examine that new tool more closely. To truly leverage its potential, we must move beyond the generalized hype and understand its specific strengths and weaknesses in the demanding industrial environment.

From our research and conversations with manufacturers across the globe, two primary high-impact applications for generative AI have clearly emerged. The first is its revolutionary role as a new type of user interface. The second is its unprecedented ability to unlock knowledge from the vast sea of unstructured data that permeates every factory.

The Rise of the “Gen UI”: AI as a Universal Translator

Perhaps the most immediate and profound impact of generative AI in industry is its function as a “generative user interface” or “Gen UI.” For decades, interacting with complex industrial software and data systems required specialized training. Engineers needed to learn specific query languages to pull data from a historian; operators had to navigate complex, menu-driven screens on a human-machine interface (HMI); maintenance staff had to know exactly where to find a specific manual in a labyrinthine document management system.

The Gen UI changes everything. It provides a conversational, natural language layer that sits between the human user and these complex backend systems. It acts as a universal translator, radically lowering the barrier to entry for accessing critical information.

The pro: radical accessibility. With a Gen UI, a process engineer can simply ask, “Show me the pressure and temperature trends for Reactor 4 during the last production run of Product XYZ and flag any anomalies.” A junior maintenance technician can ask their handheld device, “Walk me through the standard lockout-tagout procedure for the main conveyor belt motor.” This democratization of data and knowledge is a paradigm shift, empowering a much broader range of employees to make faster, better-informed decisions.
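
One way to make this translator role concrete is to have the model emit a structured query rather than free text, so that conventional, testable code still builds the actual historian or database call. The sketch below is illustrative only: the QuerySpec schema, the prompt wording and the parsing step are assumptions for this example, not a description of any particular product.

```python
# Illustrative sketch: the LLM's only job is to turn a natural-language request
# into a structured query spec; conventional code then executes it against the
# historian. QuerySpec and the prompt format are assumptions, not a product API.

import json
from dataclasses import dataclass


@dataclass
class QuerySpec:
    asset: str              # e.g. "Reactor 4"
    signals: list[str]      # e.g. ["pressure", "temperature"]
    time_range: str         # e.g. "last production run of Product XYZ"
    detect_anomalies: bool


PROMPT_TEMPLATE = (
    "Convert the user's request into JSON with keys asset, signals, "
    "time_range and detect_anomalies. Output JSON only.\n\nRequest: {request}"
)


def parse_model_output(raw: str) -> QuerySpec:
    """Validate the model's JSON before anything touches plant systems."""
    data = json.loads(raw)
    return QuerySpec(
        asset=data["asset"],
        signals=list(data["signals"]),
        time_range=data["time_range"],
        detect_anomalies=bool(data["detect_anomalies"]),
    )


if __name__ == "__main__":
    # Example of what a well-behaved model response to PROMPT_TEMPLATE could look like.
    raw = (
        '{"asset": "Reactor 4", "signals": ["pressure", "temperature"], '
        '"time_range": "last production run of Product XYZ", "detect_anomalies": true}'
    )
    print(parse_model_output(raw))
```

Because the model’s output is confined to a schema, a malformed or hallucinated response fails validation instead of silently producing a wrong query.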

The con: the persuasive lie. Herein lies the danger. Large language models (LLMs) are designed for fluency and are masters of probability, not truth. They can “hallucinate”—producing an answer that is grammatically perfect, highly confident and completely wrong. In a consumer setting, this is an annoyance. In a factory, a confidently delivered but incorrect answer about a safety procedure, an asset’s operating limit or a chemical mixture could be catastrophic.

The solution: grounding in reality. A Gen UI cannot be deployed in an industrial setting without being strictly “grounded” in the company’s own factual data. Using a technique called retrieval-augmented generation (RAG), the system is architected so the LLM doesn’t invent answers. Instead, it first retrieves verified information from trusted enterprise sources—a data historian, a maintenance database or an approved document library. The LLM’s role is then limited to translating the user’s question, understanding the retrieved facts and formatting the correct answer in natural language. This grounding in a factual data architecture, like an industrial data fabric, is the essential safety rail that makes the Gen UI viable for industry. Even then, an LLM is not 100% accurate: language nuances can still be misinterpreted and lead to inaccurate responses.
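
To make the RAG pattern concrete, here is a minimal sketch. The in-memory DOC_STORE, the naive keyword retrieval and the prompt wording are stand-ins for a real data fabric, search index and privately hosted model; all of them are assumptions of this example rather than a specific vendor implementation.

```python
# Minimal RAG-style grounding sketch. The store, retrieval and prompt are
# illustrative stand-ins for a real enterprise data fabric and model endpoint.

from dataclasses import dataclass


@dataclass
class Document:
    source: str  # e.g. "approved_docs", "maintenance_db"
    text: str


# Stand-in for a trusted enterprise index (historian, CMMS, document library).
DOC_STORE = [
    Document("approved_docs", "Lockout-tagout for conveyor motor M-101: isolate breaker B-7, apply lock, verify zero energy."),
    Document("maintenance_db", "Reactor 4 relief valve last tested 2024-03-12; next test due 2025-03-12."),
]


def retrieve(question: str, store: list[Document]) -> list[Document]:
    """Naive keyword retrieval; a real system would use a search or vector index."""
    terms = {w.lower() for w in question.split()}
    return [d for d in store if terms & set(d.text.lower().split())]


def build_grounded_prompt(question: str, docs: list[Document]) -> str:
    """Constrain the model to the retrieved facts and instruct it to refuse otherwise."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    return (
        "Answer using ONLY the context below. If the context does not contain "
        "the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    q = "What is the lockout-tagout procedure for the conveyor motor?"
    docs = retrieve(q, DOC_STORE)
    if not docs:
        print("No verified source found; do not guess.")
    else:
        # A privately hosted LLM would be called with this prompt.
        print(build_grounded_prompt(q, docs))
```

The important property is that the model is asked to answer only from retrieved, attributed context, and the calling code can refuse to answer at all when retrieval comes back empty.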

Taming the Document Tsunami with Unstructured Data

The second game-changing application for GenAI is taming the document tsunami. Our research at ARC shows that for many enterprises, as much as 80% of their data is “unstructured”—locked away in formats that are difficult for traditional analytics to parse. Factories run on this data: PDF operating manuals, P&ID schematics, environmental compliance reports, maintenance work orders and operator logbooks. For decades, the immense knowledge trapped in these documents has been largely inaccessible at scale.

The pro: unlocking trapped knowledge. LLMs are uniquely suited to ingest, index and understand this massive corpus of text. This unlocks decades of invaluable, hard-won operational knowledge. For the first time, organizations can ask complex questions across their entire document library: “Analyze all maintenance comments from the last five years for our compressor fleet and identify the most common precursor to failure.” Or, “Does our current operating procedure for Line 3 comply with the environmental regulations outlined in this 200-page permit?”
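
The first step in making that corpus queryable is usually to ingest and chunk it with enough metadata to trace every answer back to its source. Below is a minimal sketch under that assumption; the Chunk record, window sizes and field names are illustrative.

```python
# Illustrative ingest step: split long documents (manuals, work orders, logbooks)
# into overlapping text windows that keep document ID, revision and offset, so a
# retrieved chunk can always be traced back to its source. Sizes are assumptions.

from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str
    revision: str
    start: int   # character offset into the source document
    text: str


def chunk_document(doc_id: str, revision: str, text: str,
                   size: int = 800, overlap: int = 100) -> list[Chunk]:
    """Split a document into overlapping character windows for indexing."""
    chunks: list[Chunk] = []
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(Chunk(doc_id, revision, start, piece))
        if start + size >= len(text):
            break
    return chunks


if __name__ == "__main__":
    manual = "Section 4.2: Before opening the pump casing, isolate and drain the line. " * 40
    pieces = chunk_document("PUMP-OM-014", "Rev C", manual)
    print(f"{len(pieces)} chunks, first starts at offset {pieces[0].start}")
```

In practice these chunks would then be embedded or indexed alongside their metadata, which is what allows fleet-wide questions like the ones above to be answered with citations rather than guesses.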

The con: the governance nightmare. This power comes with significant risks that must be managed:

Version control: How do you guarantee the AI is referencing the latest approved engineering drawing and not an obsolete draft? The system’s knowledge base must be rigorously managed to prevent outdated information from causing errors or safety incidents.

Intellectual property: Using public LLM APIs could mean sending sensitive, proprietary operational data or product information to a third-party cloud. For most industrial companies, this is a non-starter. The solution requires deploying models within a private, secure cloud or on-premise environment.

Access control: Not every employee should see every document. The GenAI system must be integrated with existing enterprise access controls to ensure that users can only get answers from data they are authorized to view; a minimal sketch of this check, together with the version-control check above, follows.
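
The version-control and access-control points above can be enforced mechanically at retrieval time, before any text reaches the model. The sketch below is illustrative: the IndexedChunk fields and the simple role model are assumptions, not any specific product’s entitlement system.

```python
# Illustrative retrieval-time governance filters: only the latest approved
# revision of each document survives, and results are filtered against the
# caller's roles before the LLM ever sees them. Fields and roles are assumptions.

from dataclasses import dataclass


@dataclass
class IndexedChunk:
    doc_id: str
    revision: int
    approved: bool
    allowed_roles: set[str]
    text: str


def latest_approved(chunks: list[IndexedChunk]) -> list[IndexedChunk]:
    """Keep only chunks from the highest approved revision of each document."""
    newest: dict[str, int] = {}
    for c in chunks:
        if c.approved:
            newest[c.doc_id] = max(newest.get(c.doc_id, -1), c.revision)
    return [c for c in chunks if c.approved and c.revision == newest.get(c.doc_id)]


def authorized(chunks: list[IndexedChunk], user_roles: set[str]) -> list[IndexedChunk]:
    """Drop anything the requesting user is not entitled to see."""
    return [c for c in chunks if c.allowed_roles & user_roles]


def retrievable(chunks: list[IndexedChunk], user_roles: set[str]) -> list[IndexedChunk]:
    """Apply both filters before any context is passed to the model."""
    return authorized(latest_approved(chunks), user_roles)
```

Running both filters before prompt construction means an obsolete drawing or an unauthorized document can never appear in the model’s context in the first place.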

Generative AI is not a magic bullet, but it is a profoundly valuable addition to the Industrial AI toolbox. Its true power today is unlocked when we see it for what it is: a revolutionary interface that makes other systems easier to use, and a powerful processor for unlocking the value of unstructured text. When implemented thoughtfully, with the guardrails of grounding and governance, it bridges the gap between complex systems and human ingenuity.

But this raises a new question. Now that we have this powerful new conversational tool, how do we make it work in concert with all the other specialized tools in our box? The answer lies in AI agents, the topic of our final article in this series.


