ChatGPT Almost Launched Under a Different Name!

The Tech World’s Best Kept Secret

Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant

In a little-known twist in the AI world, OpenAI’s famous conversational AI, ChatGPT, was almost released under a completely different name. This revelation underscores the significance of branding in the tech industry. Find out what potential names were considered and how they could have changed the AI landscape.


Introduction to ChatGPT’s Naming

ChatGPT, the renowned AI language model developed by OpenAI, almost debuted under a different name. According to a report by Windows Central, OpenAI considered several names before settling on ‘ChatGPT’, a term that has since become synonymous with AI-driven conversation and assistance. The name effectively encapsulates the essence of the technology, standing for ‘Chat Generative Pre-trained Transformer’, which highlights its ability to generate human-like text.

OpenAI’s journey with ChatGPT underscores the collaboration and innovation inherent in developing cutting-edge technology. By harnessing vast datasets and employing advanced machine learning techniques, OpenAI crafted a sophisticated conversational agent that continues to evolve. The development process involved numerous iterations and refinements, ensuring that the final product was both efficient and user-friendly. Such meticulous attention to detail is essential in creating technology that not only meets user needs but also sets standards in AI advancements. Further insights on this development can be found in Windows Central’s detailed write-up on the topic.

The Naming Decision Process

Choosing the right name for a product is a pivotal decision in its lifecycle, as it defines initial public perception and brand identity. In the context of AI products, this decision can significantly impact user engagement and market penetration. OpenAI’s ChatGPT, for instance, almost emerged with a different name. According to an article on Windows Central, the naming process involved rigorous analysis and consideration of multiple factors to ensure alignment with the brand’s vision and user expectations.

The naming decision process typically involves understanding the product’s core functionalities, target audience, and market positioning. Companies often engage in brainstorming sessions, conduct surveys, and perform A/B testing to gauge potential customer reactions to different name options. This strategic approach aims to select a name that resonates well with the intended users and effectively communicates the product’s purpose.

In the case of ChatGPT, the implications of choosing an alternate name could have altered its reception and brand trajectory. Public reactions to AI technologies can be unpredictable, making it crucial to choose a name that accurately reflects the product’s capabilities and qualities. The iterative process involves not only creative work but also alignment with business goals and legal considerations.

Furthermore, expert opinions and industry trends play a significant role in shaping the naming decision. Companies might consult with branding experts who provide insights into how different names are perceived linguistically and culturally. This can prevent potential backlash and ensure the name supports the future growth and acceptance of the product in diverse markets.

Expert Opinions on AI Naming

The process of naming AI technologies often involves a multitude of considerations, blending creativity with strategic marketing insights. Experts frequently emphasize the importance of choosing names that are not only memorable but also reflective of the product’s capabilities. When it comes to AI like ChatGPT, a name can significantly influence public perception and acceptance. According to a detailed article on Windows Central, there was a point when ChatGPT nearly launched under a different name. Such a change could have altered brand recognition and market dynamics, underscoring the nuanced role that expert opinion plays in these crucial decisions.

Expert opinions on AI naming also highlight the intricate balance between technological jargon and user-friendly language. Many argue that a successful AI name should demystify rather than obscure the technology it represents. This approach ensures that potential users feel more connected and less intimidated by the innovation. The case of ChatGPT, as reported by Windows Central, serves as a prime example of how pivotal naming decisions are, capable of impacting user engagement and the technology’s broader societal integration.

In the realm of AI naming conventions, the opinions of branding experts and linguists are often solicited to navigate the complexities involved. As revealed in an analysis by Windows Central, these professionals explore how different names might resonate across cultures and age groups. The future implications of such decisions are vast, potentially affecting not just marketing success but also the ethical considerations associated with AI technologies. This multifaceted approach to naming reflects the current trend towards more inclusive and globally conscious branding strategies.

Public Reactions to Naming Choices

The selection of a name for a product, especially one as transformative as ChatGPT, often invites a myriad of public reactions. When OpenAI unveiled ChatGPT, some potential names were already in circulation within the tech community. Naming is not merely a marketing strategy; it reflects the ethos and intended impact of the technology. Public reactions ranged from excitement to apprehension when discovering the various naming possibilities that were almost chosen. Given that ChatGPT almost shipped with a different name, many might wonder if a different moniker would have influenced its reception and brand identity.

Among the tech-savvy public, the debate over the naming of ChatGPT extended to its brand perception and its potential to define the future of artificial intelligence. The name ChatGPT quickly became synonymous with advanced conversational AI, and the revelation that it could have had a different name sparked colorful discussions on social media. For some, the name ‘ChatGPT’ effectively encapsulates the utility and sophistication of the system, while others speculated that alternative names could have conveyed different facets of its functionality and personality.

The choice of a name is critical, as evidenced by the considerable public discourse surrounding OpenAI’s naming decision. Names resonate on a personal and global level, drawing reactions that can range from enthusiastic acceptance to critical skepticism. The knowledge that another name was almost used adds an intriguing layer to the narrative. It prompts reflection on how deeply names can influence consumer perceptions and the subsequent success of groundbreaking technologies.

Future Implications for AI Branding

The realm of AI branding is set for a transformative evolution, with future implications that extend beyond conventional marketing paradigms. As AI technologies continue to advance and become deeply integrated into everyday life, brands must rethink how they present these innovations to consumers. A notable example in this context is OpenAI’s ChatGPT, which was close to being launched under a different name, illustrating the significant impact branding decisions can have on product perception and market penetration, as the Windows Central report details.




Searching for boundaries in the AI jungle

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute, still remembers the first time he used ChatGPT. It was the fall of 2022 and a fellow student in the Netherlands sent him the link to try it out. “It made a lot of mistakes back then, but I saw how it was improving at an incredible rate. From the very first tests, I felt that it would change the world,” he tells Kathimerini. Of course, he also identified some issues, mainly legal and ethical, that could arise early on, and last year, realizing that there was no private entity that dealt exclusively with the ethical dimension of artificial intelligence, he decided to take the initiative.

He initially turned to his friends, young lawyers like him, engineers and programmers with similar concerns. “In the early days, we would meet after work, discussing ideas about what we could do,” recalls Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters. Her master’s degree, which she earned in the Netherlands in 2019, was on the ethics and regulatory aspects of new technologies. “At that time, the European Union’s white paper on artificial intelligence had just been released, which was a first, hesitant step. But even though technology is changing rapidly, the basic ethical dilemmas and how we legislate remain constant. The issue is managing to balance innovation with citizen protection,” she explains.

Together with three other Greeks (Apostolos Spanos, Michael Manis and Nikos Vadivoulis), they made up the institute’s founding team, and sought out colleagues abroad with experience in these issues. Thus Ethikon was created – a nonprofit company that does not provide legal services, but carries out educational, research and public awareness initiatives on artificial intelligence.

Stamatis Gatirdakis, co-founder and president of the Ethikon Institute.

Copyrights

One of the first issues they addressed was copyright. “In order not to stop the progress of technology, exceptions were initially made so that these generative artificial intelligence models could use online content for training purposes, without citing the source or compensating the creators,” explains Gatirdakis, adding that this resulted in copyright being sidelined. “The battle between creators and the big tech giants has been lost. But because the companies don’t want creators against them, they have started making commercial agreements, under which creators receive a percentage, on an agreed calculation model, each time their data is used to produce answers.”

Beyond compensation, another key question arises: Who is ultimately the creator of a work produced through artificial intelligence? “There are already conflicting court decisions. In the US, they argue that artificial intelligence cannot produce an ‘original’ work and that the work belongs to the companies behind the AI engines,” says Voukelatou. A typical example is the comic book ‘Zarya of the Dawn,’ authored by artist and artificial intelligence (AI) consultant Kris Kashtanova, with images generated through the AI platform Midjourney. The US Copyright Office rejected the copyright application for the images in her book when it learned that they were created exclusively by artificial intelligence. By contrast, courts in China have ruled in comparable cases that because the user gives the exact instructions, he or she is the creator.

Personal data

Another crucial issue is the protection of personal data. “When we upload notes or files, what happens to all this content? Does the algorithm learn from them? Does it use them elsewhere? Presumably not, but there are still no safeguards. There is no case law, nor a clear regulatory framework,” says Voukelatou, who points to the loopholes that companies exploit to get around personal data obstacles. “Take the application that transforms your image into a cartoon in the style of the famous Studio Ghibli. Millions of users gave consent for their image to be processed, and so this data entered the systems and trained the models. If a similar image is subsequently produced, it no longer belongs to the person who first uploaded it. And this part is legally unregulated.”

The problem, they explain, is that the development of these technologies is mainly taking place in the United States and China, which means that Europe remains on the sidelines of a meaningful discussion. The EU regulation on artificial intelligence (AI Act), which entered into force in the summer of 2024, is the first serious attempt to set a regulatory framework. Members of Ethikon participated in the consultation on the regulation and focused specifically on the categorization of artificial intelligence applications based on the level of risk. “We supported with examples the prohibition of practices such as the ‘social scoring’ adopted by China, where citizens are evaluated in real time through surveillance cameras. This approach was incorporated, and the regulation explicitly prohibits such practices,” says Gatirdakis, who participated in the consultation.

“The final text sets obligations and rules. It also provides for strict fines depending on turnover. However, we are in a transition period and we are all waiting for further guidelines from the European Union. It is expected to be fully applicable in the summer of 2026, although there are already delays in the timetable and in the establishment of the supervisory authorities,” the two experts said.

Maria Voukelatou, executive director at Ethikon and lawyer specialized in technology law and IP matters.

The team’s activities

Beyond consultation, the Ethikon team is already developing a series of actions to raise awareness among users, whether they are business executives or students growing up with artificial intelligence. The team’s executives created a comic inspired by the Antikythera Mechanism that explains in a simple way the possibilities but also the dangers of this new technology. They also developed a generative AI engine based exclusively on sources from scientific libraries – however, its use is expensive and they are currently limiting it to pilot educational actions. They recently organized a conference in collaboration with the Laskaridis Foundation and published an academic article on March 29 exploring the legal framework for strengthening copyright.

In the article, titled “Who Owns the Output? Bridging Law and Technology in LLMs Attribution,” they analyze, among other things, the specific tools and techniques that allow the detection of content generated by artificial intelligence and its connection to the data used to train the model or the user who created it. “For example, a digital signature can be embedded in texts, images or videos generated by AI, invisible to the user, but recognizable with specific tools,” they explain.
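The paper’s actual embedding scheme is not reproduced in the article. As a toy illustration of the general idea, the sketch below hides a short signature in text using zero-width Unicode characters; the scheme and signature format are invented for this example, and real provenance systems (statistical token watermarks, C2PA-style signed metadata) are far more robust to editing and stripping.

```python
# Minimal sketch: hide a bit-string "signature" in text using zero-width
# Unicode characters, invisible to readers but recoverable with a tool.
# Illustrative only; not a production watermarking scheme.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_signature(text: str, signature: str) -> str:
    """Append the signature as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in signature.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_signature(text: str) -> str:
    """Recover the hidden signature, if any."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # ignore any trailing partial byte
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="replace")

marked = embed_signature("This paragraph was generated by a model.", "AI-GEN:v1")
assert extract_signature(marked) == "AI-GEN:v1"  # invisible to the reader
```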

The Ethikon team has already begun writing a second – more technical – academic article, while closely monitoring technological developments internationally. “In 2026, we believe that we will be much more concerned with the energy and environmental footprint of artificial intelligence,” says Gatirdakis. “Training and operating models requires enormous computing power, resulting in excessively high energy and water consumption for cooling data centers. The concern is not only technical or academic – it touches the core of the ethical development of artificial intelligence. How do we balance innovation with sustainability?” At the same time, he explains, serious issues of truth management and security have already arisen. “We are entering a period where we will not be able to easily distinguish whether what we see or hear is real or fabricated,” he continues.

In some countries, the adoption of the technology is happening at breakneck speed. In the United Arab Emirates, an artificial intelligence system has been developed that drafts legislation and monitors its implementation. Meanwhile, OpenAI has announced a partnership with iPhone designer Jony Ive to launch, in late 2026, a new device that integrates artificial intelligence with voice, visual and personal interaction. “A new era seems to be approaching, in which artificial intelligence will be present not only on our screens but also in the natural environment.”






How to start a career in the age of AI – Computerworld





AI will boost the value of human creativity in financial services, says AWS


Financial services firms are making early gains from artificial intelligence (AI), which is not surprising given that finance is historically an industry that embraces new technologies aggressively.

Also: The AI complexity paradox: More productivity, more responsibilities

One surprising outcome is that AI might end up making the most critical functions of banking, insurance, and trading, namely the creative functions that require human insight, even more valuable.

“What happens is there’s going to be a premium on creativity and judgment that goes into the process,” said John Kain, who is head of market development efforts in financial services for AWS, in an interview with ZDNET via Zoom. 

Also: AI usage is stalling out at work from lack of education and support

By process, he meant those areas that are most advanced, and presumably hardest to automate, such as a bank’s risk calculations.

John Kain, head of market development for financial services at AWS. (Image: Amazon AWS)

“So much of what’s undifferentiated will be automated,” said Kain. “But what that means is what actually differentiates the business and the ability to serve customers better, whether that’s better understanding products or risk, or coming up with new products, from a financial perspective, the pace of that will just go so much more quickly in the future.”

Amazon formed its financial services unit 10 years ago, the first time the cloud giant took an industry-specific approach.

For eight years, Kain has helped bring the cloud giant’s tools to banks, insurers, and hedge funds. That approach includes both moving workloads to the cloud and implementing AI, including the large language models (LLMs) of generative AI (Gen AI), in his clients’ processes.

“If you look at what we’re trying to do, we’re trying to provide our customers an environment where, from a security, compliance, and governance perspective, we give them a platform that ticks the boxes for everything that’s table stakes for financial services,” said Kain, “but also gives them the access to the latest technologies, and choice in being able to bring the best patterns to the industry.”

Also: Are AI subscriptions worth it? Most people don’t seem to think so, according to this study

Kain, who started his career in operations on the trading floor and worked at firms such as JP Morgan Chase and Nasdaq, had many examples of gains through the automation of financial functions, such as customer service and equity research.

Early use of AWS by financials included things such as back-testing portfolios of investments to predict performance, the kind of workload that is “well-suited to cloud” because it requires computer simulations “to really work well in parallel,” said Kain.

“That ability to be able to do research much more quickly in AWS meant that investment research firms could quickly see those benefits,” he said. “You’ve seen that repeated across the industry regardless of the firm.”

Taking advantage of the tech

Early implementations of Gen AI are showing many commonalities across firms. “They’ll be repeatable patterns, whether it’s document processing that could show up as mortgage automation with PennyMac, or claims processing with The Travelers Companies,” said Kain.

Such processes come with an extra degree of sensitivity, Kain said, given the regulated status of finance. “Not only do they have a priority on resilience as well as security, they have evidentiary requirements to a far greater degree than any other industry, because the regulations on financial services are typically very prescriptive,” he explained. “There’s a much higher bar in the industry.”

Also: Amazon’s Andy Jassy says AI will take some jobs but make others more ‘interesting’

Finance has been an early adopter of an AI-based technology invented at AWS, originally called Zelkova, which is now more generally referred to as “automated reasoning.” The technology combines machine-learning AI with mathematical proofs to formally validate security measures, such as who has access to resources in a bank.

“It was an effort to allow customers to prove that the security controls they put in place were knowably effective,” said Kain. “That was important for our financial services customers,” including hedge fund Bridgewater and other early adopters.
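Zelkova itself compiles IAM policies into logical formulas and checks properties with SMT solvers; none of that machinery appears in the article. As a miniature of the underlying idea, the sketch below, with an entirely hypothetical access policy, proves a property by exhaustively checking every possible request in a small, closed world rather than testing a sample:

```python
# Toy flavor of automated reasoning over an access policy: instead of testing
# sample requests, exhaustively check EVERY request in a small, closed world.
# Zelkova does the analogous thing at scale with SMT solvers over IAM policies.

PRINCIPALS = ["alice", "bob", "anonymous"]
ACTIONS = ["read", "write"]
RESOURCES = ["ledger", "public-docs"]

def allows(principal: str, action: str, resource: str) -> bool:
    """The (hypothetical) access policy under test."""
    if resource == "public-docs" and action == "read":
        return True  # anyone may read public docs
    return principal in ("alice", "bob") and resource == "ledger"

def counterexamples(prop) -> list[tuple[str, str, str]]:
    """All (principal, action, resource) triples that violate the property."""
    return [(p, a, r) for p in PRINCIPALS for a in ACTIONS for r in RESOURCES
            if not prop(p, a, r)]

# Property: anonymous principals can never write, anywhere.
def no_anon_writes(p: str, a: str, r: str) -> bool:
    return not (p == "anonymous" and a == "write" and allows(p, a, r))

print(counterexamples(no_anon_writes))  # [] means the property is proved
```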

Now, automated reasoning is also being employed to fix Gen AI.

“You’re seeing that same approach now being taken to improve the performance of large language models, particularly with hallucination reduction,” he said. 

To mitigate hallucinations, or “confabulations,” as the errors in Gen AI are more properly known, AWS’s Bedrock platform for running machine learning programs uses retrieval-augmented generation (RAG). 

The RAG approach involves connecting an LLM to a source of validated information, such as a database. The source serves as a gold standard to “anchor” the models to limit error.
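The article describes the pattern but shows no code. A minimal sketch, assuming a toy word-overlap retriever and a stubbed `call_llm` function in place of a real model client (a production setup would use vector search and, for instance, a Bedrock-hosted model):

```python
# Minimal sketch of retrieval-augmented generation (RAG): anchor the model to
# a store of validated documents and forbid answers from outside it.
# The documents, retriever, and call_llm stub are hypothetical stand-ins.

VALIDATED_DOCS = [
    "Wire transfers over $10,000 require enhanced due-diligence review.",
    "Mortgage rate locks are valid for 45 days from the application date.",
]

def call_llm(prompt: str) -> str:
    """Stub for a chat-completion call; wire in a real client here."""
    return "(model answer constrained to the supplied context)"

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank by word overlap (real systems use embeddings)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, VALIDATED_DOCS))
    prompt = ("Answer using ONLY this context; say 'not found' otherwise.\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    return call_llm(prompt)

print(answer("How long is a mortgage rate lock valid?"))
```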

Also: Cisco rolls out AI agents to automate network tasks at ‘machine speed’ – with IT still in control

Once anchored, automated reasoning is applied to “actually allow you to create your own policies that will then give you an extra level of security and detail to make sure that the responses that you’re providing [from the AI model] are accurate.”

The RAG approach, and automated reasoning, are increasingly leading clients in financial services to implement “smaller, domain-specific tasks” in AI that can be connected to a set of specific data, he said. 

Surveys of enterprise use show that financial firms’ first Gen AI use cases include automating call centers. “From a large language model perspective, there are actually a number of use cases that we’ve seen the industry achieve almost immediate ROI [return on investment],” said Kain. “The foremost is customer interaction, particularly at the call center.”

AWS customers, including Principal Financial, Ally Financial, Rocket Mortgage, and crypto-currency exchange Coinbase, have all exploited Gen AI to “take those [customer] calls, transcribe them in real time, and then provide information to the agents that provide the context of why customers are calling, plus their history, and then guide them [the human call agents] to the right response.” 

Coinbase used that approach to automate 64% of support calls, up from 19% two years ago, with the aim of reaching 90% in the future.

Coinbase presents its findings at the AWS Financial Services Summit in New York, 2025. (Photo: Tiernan Ray/ZDNET)
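None of these firms’ implementations are public. A minimal sketch of the agent-assist loop the article describes, with stubbed transcription, customer-history lookup, and model calls standing in for real services:

```python
# Minimal sketch of call-center "agent assist": transcribe the caller, attach
# account context, and surface a suggested next step to the human agent.
# All three helpers are hypothetical stubs so the sketch runs on its own.

def transcribe_chunk(audio: bytes) -> str:
    return "Hi, I was charged twice for my July payment."  # stub STT output

def fetch_history(customer_id: str) -> str:
    return "Auto-loan customer since 2021; one prior billing dispute."  # stub CRM

def call_llm(prompt: str) -> str:
    return "Intent: duplicate charge. Suggest: apologize, open billing ticket."  # stub

def assist(audio: bytes, customer_id: str) -> str:
    """Build the context-rich prompt and return guidance for the human agent."""
    prompt = ("You support a human call-center agent.\n"
              f"Customer history: {fetch_history(customer_id)}\n"
              f"Live transcript: {transcribe_chunk(audio)}\n"
              "Return the caller's intent and a suggested next step.")
    return call_llm(prompt)

print(assist(b"", "cust-001"))
```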

Finding fresh opportunities

Another area where automation is being used is in monitoring alerts, such as fraud warnings. It’s a bit like AI in cybersecurity, where AI handles a flood of signals that would overwhelm a human analyst or investigator.

Fraud alerts and other warnings “generate a large number of false positives,” said Kain, which means a lot of extra work for fraud teams and other financial staff to “spend a good chunk of their day looking at things that aren’t actually fraud.” 

Instead, “customers can use large language models to help accelerate the investigation process” by summarizing the alerts, and then create a summary report to be given to the human investigator. 
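As a rough sketch of that summarization step (the alert fields, prompt, and `call_llm` stub are all hypothetical), the idea is to batch the raw alerts into a single prompt and hand the investigator a ranked digest instead of the raw queue:

```python
# Minimal sketch of LLM-assisted alert triage: summarize a batch of fraud
# alerts into one ranked report for the human investigator.

def call_llm(prompt: str) -> str:
    """Stub for a model call; wire in a real client here."""
    return "1. Alert 7 (large wire, 3-day-old account): review first\n..."

def triage_report(alerts: list[dict]) -> str:
    lines = [f"- id={a['id']} type={a['type']} amount=${a['amount']:,} "
             f"account_age_days={a['account_age_days']}"
             for a in alerts]
    prompt = ("You assist a financial-crime investigator. Rank these alerts "
              "by likelihood of true fraud, one-line reason each, and list "
              "probable false positives last.\n" + "\n".join(lines))
    return call_llm(prompt)

alerts = [
    {"id": 7, "type": "wire", "amount": 48000, "account_age_days": 3},
    {"id": 8, "type": "card", "amount": 19, "account_age_days": 2100},
]
print(triage_report(alerts))
```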

Verafin specializes in anti-money laundering efforts and is an AWS customer using this approach. 

“They’ve shown they can save 80% to 90% of the time it takes to investigate an alert,” he said. 

Also: Think DeepSeek has cut AI spending? Think again

Another automation area is “middle office processing,” including customer inquiries to a brokerage for trade confirmation. 

One AWS client, brokerage Jefferies & Co., has set up “agentic AI” where the AI model “would actually go through their inbox, saying, this is a request for confirming a price” of a securities trade. 

That agent passes the request to another agent to “go out and query a database to get the actual trade price for the customer, and then generate the email” that gets sent to the customer.

“It’s not a huge process, it takes a human, maybe, ten, fifteen minutes to go do it themselves,” said Kain, “but you go from something that was minutes down to seconds through agents.” 
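Jefferies’ actual pipeline isn’t public. A toy sketch of the handoff, with a hypothetical in-memory trade store and a keyword-based triage step standing in for what would really be an LLM with tool-calling:

```python
# Minimal sketch of the two-agent trade-confirmation flow: one agent
# classifies the inbox message, a second queries the trade store and
# drafts the reply. The store and parsing logic are hypothetical.

TRADES = {"T-1042": {"symbol": "ACME", "price": 101.25, "qty": 500}}

def triage_agent(message: str) -> str | None:
    """Agent 1: is this a price-confirmation request? Return the trade ID."""
    if "confirm" in message.lower():
        for token in message.split():
            if token.startswith("T-"):
                return token.rstrip(".,?")
    return None

def confirmation_agent(trade_id: str) -> str:
    """Agent 2: look up the trade and draft the customer email."""
    t = TRADES[trade_id]
    return (f"Subject: Confirmation for {trade_id}\n\n"
            f"We confirm {t['qty']} shares of {t['symbol']} at {t['price']:.2f}.")

msg = "Hi, can you confirm the execution price on trade T-1042?"
trade_id = triage_agent(msg)
if trade_id:
    print(confirmation_agent(trade_id))  # seconds instead of minutes, per Kain
```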

The same kinds of applications have been seen in the mortgage and insurance business, he said, and in energy, with Canada’s Total Energy Services confirming contracts. 

Also: You’ve heard about AI killing jobs, but here are 15 new ones AI could create

One of the “most interesting” areas in finance for Gen AI, said Kain, is in investment research. 

Hedge fund Bridgewater uses LLMs to “basically take a freeform text [summary] about an investment idea, break that down into nine individual steps, and, for each step, kick off an [AI] agent that would go understand what data was necessary to answer the question, build a dependency map between the various trade-offs within an investment model, and then write the code to pull real-time data from the investment data store, and then generate a report like a first-year investment professional.”
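Bridgewater’s system is described only at that level of detail, but the plan-decompose-execute shape can be sketched as a dependency graph of (stubbed, hypothetical) agent steps walked in topological order:

```python
# Minimal sketch of the plan-decompose-execute pattern: split a research
# question into dependent steps, then run a stub "agent" per step in
# topological order. Step names and the agent are hypothetical.

from graphlib import TopologicalSorter

STEPS = {  # step -> set of prerequisite steps
    "load_prices":   set(),
    "load_macro":    set(),
    "build_factors": {"load_prices", "load_macro"},
    "run_backtest":  {"build_factors"},
    "write_report":  {"run_backtest"},
}

def run_agent(step: str, results: dict) -> str:
    """Stub agent: a real one would fetch data or write and run code."""
    return f"{step} done (inputs: {sorted(STEPS[step])})"

results: dict[str, str] = {}
for step in TopologicalSorter(STEPS).static_order():
    results[step] = run_agent(step, results)
print(results["write_report"])
```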

Credit rating giant Moody’s is using agents to automate memos on credit ratings. However, credit ratings are usually for public companies because only these firms must report their financial data by law. Now, Moody’s peer, S&P Global, has been able to extend ratings to private companies by amassing snippets of data here and there. 

“There’s an opportunity to leverage large language models to scour what’s publicly available to do credit information on private companies,” said Kain. “That allows the private credit market to have better-anchored information to make private credit decisions.”

These represent “just amazing capabilities,” said Kain of the AI use cases.

Moving into new areas

AI is not yet automating many core functions of banks and other financial firms, such as calculating the most complex risk profiles for securities. But, “I think it’s closer than you think,” said Kain.

“It’s not where we’ve completely moved to trusting the machine to generate, let’s say, trading strategies or risk management approaches,” said Kain. 

Also: 5 ways you can plug the widening AI skills gap at your business

However, the beginnings of forecasting and analysis are present. Consider the problem of calculating the impact of new US tariffs on the cash flows of companies. That is “happening today as partially an AI function,” he said. 

Financial firms “are definitely looking at data at scale, reacting to market movements, and then seeing how they should be updating their positions accordingly,” he explained. 

“That ability to ingest data at a global scale is something that I think is so much easier than it was a year ago,” because of Gen AI.

AWS customer Crypto.com, a trading platform for cryptocurrencies, can watch news feeds in 25 different languages using a combination of multiple LLMs. 

“They are able to identify which stories are about currencies, and tell if that is a positive or negative signal, and then aggregate that as inputs to their customers,” for trading purposes. As long as two of the three models monitoring the feeds agreed, “they had conviction that there was a signal there” of value. 

“So, we’re seeing that use of generative AI to check generative AI, if you will, to provide confidence at scale.”
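A minimal sketch of that two-of-three voting scheme, with stub scorers standing in for the three independent LLMs:

```python
# Minimal sketch of 2-of-3 model consensus: three independent models score a
# news item, and a signal is emitted only when a majority agree. The three
# scorers are hypothetical stand-ins for separate LLM calls.

from collections import Counter

def score_model_a(story: str) -> str: return "positive"  # stub LLM #1
def score_model_b(story: str) -> str: return "positive"  # stub LLM #2
def score_model_c(story: str) -> str: return "neutral"   # stub LLM #3

def consensus_signal(story: str, quorum: int = 2) -> str | None:
    """Return a sentiment label only if at least `quorum` models agree."""
    votes = Counter(f(story) for f in (score_model_a, score_model_b, score_model_c))
    label, count = votes.most_common(1)[0]
    return label if count >= quorum else None

print(consensus_signal("Regulator approves new ETF"))  # "positive" (2 of 3 agree)
```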

Also: Phishers built fake Okta and Microsoft 365 login sites with AI – here’s how to protect yourself

Those human-centered tasks that remain at the core of banking, insurance, and trading are probably the most valuable in the industry, including the most complex functions, such as creating new derivative products or underwriting initial public offerings. 

Those are areas that will enjoy the “premium” for creativity, in Kain’s view. Yet how much longer these tasks remain centered on human creation is an open question. 

“I wish I had a crystal ball to say how much of that is truly automatable in the next few years,” said Kain. 

“But given the tremendous adoption [of AI], and the ability for us to process data so much more effectively than even just two, three years ago, it’s an exciting time to see where this will all end up.”




