Tribal technology conference kicks off Monday with focus on hospitality, cybersecurity, and AI — CDC Gaming


The 26th annual TribalNet Conference & Tradeshow kicks off Monday in Reno. This year’s event focuses heavily on gaming and hospitality technology on the first day, followed by an emphasis on cybersecurity throughout the week.

The conference at the Grand Sierra Resort runs through Thursday. It attracts IT professionals, gaming and hospitality executives, and others within tribal government operations, who discuss transformational technologies.

Cybersecurity has been a major focus in Nevada, which sustained a ransomware attack in late August that impacted state offices, websites, and services and forced office closures that were intended to be temporary but remain ongoing.

Cyberattacks continue to plague tribal gaming operations. Since the pandemic, tribal casinos around the country have been temporarily shuttered due to the attacks.

“Plenty of attacks continue to cause issues in the cyber world,” said Mike Day, founder and executive director of TribalHub, which puts on the conference. “We’ve integrated best practices of what tribes are doing and we’re watching our Tribal ISAC (the Tribal Information Sharing and Analysis Center) grow, which is all about cybersecurity of cyber professionals by tribes for tribes. That communication among tribes is a game changer. They’re sharing information about threats much more quickly.”

The threat of cyberattacks is growing more complicated as artificial intelligence progresses, Day said. These threats include AI-aided impersonation of executives and identity theft, and phishing attempts have become more difficult to detect.

“A lot of people are rebranding well-known brands in their phishing attempts and these attacks are devastating,” Day said. “There are new ways of having to think about how to protect your employees and organization. No one is immune from this – governments, companies, and individuals.”

The gaming and hospitality track has four sessions, three of them on Monday: cashless wallets and the best practices for managing them successfully; what’s new with casino gaming systems; how to create the best customer digital experience; and emerging technology in gaming and hospitality and what the future may bring.

Panelists represent gaming-system leaders at Aristocrat, IGT, Light & Wonder, and CasinoTrac.

“We have the big gaming-systems companies here and we’re talking about what they’re doing to prepare casinos for the future,” Day said. “We’re asking them some AI and cybersecurity questions as well; they’re important for helping organizations drive new revenue. Technology is a critical piece of all your operations. If you’re more efficient and saving money in some way, it’s probably got a huge technology component. If you’re making new money, it almost assuredly has a huge technology component to it. That’s the message we’re trying to get across.

“People need to think about technology differently. It’s not just something happening in the back room adding up numbers,” Day said. “It’s driving revenue and saving money. It didn’t always do that. That’s why it’s important to have a strategic technology plan, whether you’re a CEO or CIO or any of the leaders from gaming and hospitality organizations.”

TribalNet is expecting its largest attendance in history and largest tradeshow floor ever, Day said. People are recognizing that it’s not just an information technology conference, but an event that’s driving where their organizations are going in the future.

More than 700 people are expected to attend, along with nearly 250 exhibitors. Combined, there will be 1,700 to 1,800 people or more at TribalNet.



Implementing Next-Generation AI for Impact

AI in Finance 2025 will take place at the iconic Kimpton Fitzroy London on Wednesday 26th November and will provide a roadmap for 150+ senior technology leaders from financial services to move beyond the well-established use cases and implement next-generation AI at scale.

Organised by the team behind London Tech Week, the event features an impressive lineup of speakers who are directly responsible for driving their organisation’s AI strategy.

Headline speakers include:

  • Dara Sosulski, Managing Director & Head of Artificial Intelligence and Model Management, HSBC
  • Christoph Rabenseifner, Chief Strategy Officer for Technology, Data and Innovation, Deutsche Bank
  • Kirsten Mycroft, Chief Privacy & Responsible AI Officer, BNY
  • Morgane Peng, Managing Director, Head of Product Design & AI Lead, Societe Generale
  • Nitin Kulkarni, CIO for Data Platforms, Data Engineering, and AI Centre of Excellence, Nationwide Building Society
  • Elena Strbac, Managing Director, Global Head of Data Science & Innovation, Standard Chartered
  • Neil Boston, Co-Group Head of Emerging Technology, UBS

These speakers (and others) will be discussing only the most business-critical AI topics, including implementing and scaling impactful agentic AI, creating hyper-personalised customer experiences, and driving enterprise-wide adoption.

Attendees can expect to meet senior AI, Technology, Data & Analytics leaders from leading financial services institutions across the UK and Europe – including HSBC, J.P. Morgan, Citi, Starling Bank, Barclays, and others.

Key Highlights:

  • Hear from tech leaders driving their company’s most advanced AI projects; specifically, the success stories and key lessons learned from their AI journey to date.   
  • Discover the latest cutting-edge AI solutions that can help address your unique business needs
  • Build connections with fellow AI leaders in financial services via our interactive-first setup (multiple focused breakouts, an app to facilitate onsite meetings, and onstage Q&A and live polling)
  • Be exposed to fresh ideas from foreign banks and leaders in other highly regulated industries on how to tackle common AI challenges

AI in Finance 2025 promises to be a pivotal gathering for professionals across banking, fintech, and investment sectors, offering actionable strategies to harness AI for competitive advantage.

Registration: Discover the full speaker lineup, agenda & range of registration options via our brochure today!



Quantum, Blockchain, and Key Challenges

The Surge of AI in Financial Services

In the rapidly evolving world of financial technology, artificial intelligence is not just a tool but a transformative force reshaping how banks and insurers operate. According to a detailed report in the Financial Times, AI-driven innovations are accelerating decision-making processes, from risk assessment to customer personalization, with major players like JPMorgan Chase investing billions in machine learning algorithms that predict market shifts with unprecedented accuracy. This shift is driven by the need to handle vast data volumes in real time, where traditional methods fall short.

Beyond banking, AI’s integration with edge computing is enabling instant actions in remote operations, as highlighted in posts on X from tech analysts who note its role in reducing latency for fraud detection. For instance, systems that process transactions at the point of sale are cutting fraud losses by up to 30%, according to insights shared by industry observers on the platform, emphasizing how this tech duo is becoming indispensable for secure, efficient financial ecosystems.
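To make the latency point concrete, here is a minimal, hypothetical Python sketch of the kind of lightweight scoring that could run on a point-of-sale device itself rather than waiting on a round trip to a central service; the rule names and thresholds are invented for illustration and do not reflect any vendor’s actual system.

```python
# Hypothetical edge-side fraud screen: score the transaction locally so a
# decision is made immediately, without a round trip to a remote service.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float          # purchase amount in local currency
    distance_km: float     # distance from the cardholder's usual location
    attempts_last_hour: int

def edge_fraud_score(tx: Transaction) -> float:
    """Toy rule-based score in [0, 1]; higher means more suspicious."""
    score = 0.0
    if tx.amount > 1_000:
        score += 0.4
    if tx.distance_km > 500:
        score += 0.3
    if tx.attempts_last_hour > 3:
        score += 0.3
    return min(score, 1.0)

# Decline on-device when the score crosses a threshold; everything else
# would be forwarded asynchronously to a central model for a second opinion.
tx = Transaction(amount=1_450.0, distance_km=820.0, attempts_last_hour=1)
print("decline" if edge_fraud_score(tx) >= 0.6 else "approve")
```

In practice such an on-device screen would typically sit in front of a heavier central model that reviews forwarded transactions after the fact, which is where the latency savings described above come from.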

Quantum Computing’s Disruptive Potential

The rise of quantum computing represents another seismic shift, promising to solve complex financial models that classical computers struggle with. A Forbes Council post from April 2025 details how quantum tech could optimize portfolio management by simulating countless scenarios in seconds, a capability that’s drawing heavy investment from firms like Goldman Sachs. This trend aligns with broader industry moves toward advanced analytics, where quantum’s power addresses longstanding challenges in encryption and optimization.

Meanwhile, Capgemini’s TechnoVision 2025 report underscores quantum’s synergy with AI in financial services, forecasting its widespread adoption by 2030. Insiders warn, however, of the cybersecurity risks, as quantum could crack current encryption standards, prompting a race to develop quantum-resistant protocols as discussed in recent web analyses from cybersecurity experts.

Tokenization and Blockchain Innovations

Tokenization of assets is emerging as a key innovation, turning illiquid holdings like real estate into tradable digital tokens on blockchain networks. The Financial Times article explores how this democratizes investment, allowing fractional ownership and faster settlements, with examples from startups partnering with traditional banks to tokenize bonds and equities. This not only boosts liquidity but also reduces intermediary costs, potentially saving the industry trillions annually.

X posts from investment advisors highlight tokenization’s role in decentralized finance, with trends pointing to its integration with renewable energy projects for sustainable funding. Plaid’s insights on fintech trends further elaborate that by 2025, tokenization will underpin new consumer tools, enabling seamless cross-border payments and micro-investments, though regulatory hurdles remain a focal point in ongoing discussions.

Sustainability and Ethical AI Challenges

Sustainability is weaving into tech trends, with AI optimizing energy use in data centers that power financial operations. A McKinsey report from 2025 identifies agentic AI—autonomous systems—as a game-changer for enterprise innovation, helping firms like Visa reduce carbon footprints through smarter resource allocation. Yet, ethical concerns loom large, as biases in AI models could exacerbate inequalities in lending practices.

Web sources, including Outlook Money’s overview of tech trends in finance, stress the importance of robust governance to mitigate these risks. Industry insiders on X echo this, calling for transparent AI frameworks to ensure fair outcomes, especially as telemedicine and mental health apps intersect with financial wellness tools.

Navigating Geopolitical and Talent Gaps

Geopolitical tensions are complicating supply chains for semiconductors critical to these technologies. Smart Sync Investment Advisory Services on X notes the fragility of these supply chains despite massive capital expenditures, with export controls potentially delaying AI hardware advancements. This underscores the need for diversified sourcing strategies in an era where chips power everything from quantum simulations to blockchain ledgers.

Talent shortages in areas like AI design and ethical hacking pose another barrier, as per BigID’s white paper on 2025 tech challenges. Firms are ramping up training programs, but the gap persists, threatening innovation pace. As the Financial Times piece concludes, overcoming these hurdles will define which players thrive in this high-stakes arena, blending cutting-edge tech with strategic foresight to forge resilient financial futures.



How to write an AI ethics policy for the workplace


If there is one common thread throughout recent research about AI at work, it’s that there is no definitive take on how people are using the technology — and how they feel about the imperative to do so.

Large language models can be used to draft policies, generative AI for image creation, and machine learning for predictive analytics, Ines Bahr, a senior Capterra analyst who specializes in HR industry trends, told HR Dive via email.

Still, there’s a lack of clarity around which tools should be used and when, because of the broad range of applications on the market, Bahr said. Organizations have implemented these tools, but “usage policies are often confusing to employees,” she said, which leads to unsanctioned but not always malicious use of certain tech tools.

The result can be unethical or even allegedly illegal actions: AI use can create data privacy concerns, run afoul of state and local laws and give rise to claims of identity-based discrimination.

Compliance and culture go hand in hand

While AI ethics policies largely address compliance, culture can be an equally important component. If employers can explain the reasoning behind AI rules, “employees feel empowered by AI rather than threatened,” Bahr said. 

“By guaranteeing human oversight and communicating that AI is a tool to assist workers, not replace them, a company creates an environment where employees not only use AI compliantly but also responsibly,” Bahr added.

Kevin Frechette, CEO of AI software company Fairmarkit, emphasized similar themes in his advice for HR professionals building an AI ethics policy.

The best policies answer two questions, he said: “How will AI help our teams do their best work, and how will we make sure it never erodes trust?”

“If you can’t answer how your AI will make someone’s day better, you’re probably not ready to write the policy,” Frechette said over email.

Many policy conversations, he said, are backward, prioritizing the technology instead of the workers themselves: “An AI ethics policy shouldn’t start with the model; it should start with the people it impacts.”

Consider industry-specific issues

A model of IBM Quantum during the inauguration of Europe’s first IBM Quantum Data Center on Oct. 1, 2024, in Ehningen, Germany. The center provides cloud-based quantum computing for companies, research institutions and government agencies. (Photo: Thomas Niedermueller via Getty Images)

Industries involved in creating AI tools have additional layers to consider: Bahr pointed to research from Capterra that revealed that software vulnerabilities were the top cause of data breaches in the U.S. last year. 

“AI-generated code or vibe coding can present a security risk, especially if the AI model is trained on public code and inadvertently replicates existing vulnerabilities into new code,” Bahr explained. 

An AI disclosure policy should address security risks, create internal review guidelines for AI-generated code, and provide training to promote secure coding practices, Bahr said.
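As one illustration of what such internal review guidelines might look for, the following minimal Python sketch (hypothetical function names, standard library only) contrasts a string-built SQL query, a classic injection risk common in public code that an assistant trained on that code could plausibly reproduce, with the parameterized form a reviewer would require.

```python
# Toy example of a review-guideline check: string-built SQL (a classic
# injection risk often found in public code) versus a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Pattern an AI assistant might replicate from public examples:
    # user input concatenated straight into the SQL statement.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # What an internal review guideline would require instead:
    # a parameterized query, so input is never interpreted as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_safe("alice"))
```

The specific query matters less than the habit: a policy can name patterns like the first function as triggers for mandatory human review before AI-generated code ships.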

For companies involved in content creation, an AI disclosure could be required and should address how workers are responsible for the final product or outcome, Bahr said.

“This policy not only signals to the general public that human input has been involved in published content, but also establishes responsibilities for employees to comply with necessary disclosures,” Bahr said.

“Beyond fact-checking, the policy needs to address the use of intellectual property in public AI tools,” she said. “For example, an entertainment company should be clear about using an actor’s voice to create new lines of dialogue without their permission.”

Likewise, a software sales representative should be able to explain to clients how AI is used in the company’s products. Customer data use can also be part of a disclosure policy, for example.

The policy’s in place. What now?

Because AI technology is constantly evolving, employers must remain flexible, experts say. 

“A static AI policy will be outdated before the ink dries,” according to Frechette of Fairmarkit. “Treat it like a living playbook that evolves with the tech, the regulations, and the needs of your workforce,” he told HR Dive via email. 

HR also should continue to test the AI policies and update them regularly, according to Frechette. “It’s not about getting it perfect on Day One,” he said. “It’s about making sure it’s still relevant and effective six months later.”


