What the vaccine rollout can teach us about big data and AI


Join Transform 2021 this July 12-16. Register for the AI event of the year.


I’ve spent my entire career looking at data and the world through a scientific lens. Perhaps that’s why when observing the vaccine rollout, I discovered some interesting connections between the challenges governments and scientists have overcome and those most enterprise companies face.

Considering this comparison, I arrived at four tech takeaways for enterprise companies as they become data-driven and innovate with big data and AI:

1. Train your employees to trust the science

When we think about the extensive campaigns and mass education that helped build confidence in the science behind the vaccine, the same needs to happen within the enterprise. There is a tendency for employees to dismiss AI on anecdotal evidence. They lean towards their own biases instead of allowing AI and predictive modeling to do its job.

We see this often among sales reps. They see that their AI tech was off once or twice and dismiss the science completely. Sadly, this can interfere with or even override the enterprise go-to-market strategy entirely. So, what enterprise companies should do is educate their workforce on how to work with the technology and the data, not against them.

Employees should learn how to look at AI in a scientific way, analyzing its risk-reward effectiveness according to benchmarks and overall pipeline metrics, seeing how AI is impacting their business as a whole instead of on a case-by-case basis.

2. AI can still be powerful in a world where you have a limited sample size

As in the process of developing the vaccine, enterprise companies are also limited to a small data set and quick timeline. They don’t have the luxury to run multiple tests or put in years of research and trials. Neither the Covid-19 virus nor enterprise customers have that kind of patience, unfortunately.

I work in the B2B world, and that industry has a fraction of the data B2C has. Companies that have only a few tens of clients would still like to use AI to find more. Provided they use the right methods (selecting the right benchmarks, running accurate A/B tests, and bringing in additional data from outside the organization), AI can be just as powerful when dealing with a small data set and a short timeline.
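To make the small-sample point concrete: even with a few dozen data points, you can quantify the uncertainty in a metric honestly rather than dismissing it. Here is a minimal sketch using only the Python standard library to put a bootstrap confidence interval around a mean; the client lift figures are hypothetical.

```python
import random
import statistics

def bootstrap_ci(sample, n_resamples=10_000, alpha=0.05, seed=42):
    """Estimate a confidence interval for the mean of a small sample
    by resampling it with replacement (percentile bootstrap)."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(sample, k=len(sample)))
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * (alpha / 2))]
    hi = means[int(n_resamples * (1 - alpha / 2))]
    return lo, hi

# Hypothetical: conversion lift observed for just 20 clients.
lifts = [0.02, 0.05, -0.01, 0.04, 0.03, 0.00, 0.06, 0.02, 0.01, 0.05,
         0.03, -0.02, 0.04, 0.02, 0.07, 0.01, 0.03, 0.00, 0.05, 0.02]
low, high = bootstrap_ci(lifts)
```

If the whole interval sits above zero, the lift is probably real even though the sample is tiny; if it straddles zero, more outside data is needed before trusting the model.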

3. Be aggressive with your timeline

No matter the industry, every company I’ve ever worked with has considered its goals “high stakes.” Although not as high stakes as developing a vaccine, these businesses still have millions on the line and are solving high-stakes business problems. So my suggestion to them is to be aggressive.

During the pandemic, I worked closely with a company whose demand increased by 10X because of the nature of their business and our world’s current needs. Before the pandemic, they were using manual solutions, but with such “high stakes” and massive opportunity, they couldn’t afford not to bring on sophisticated AI technology.

On top of making the switch so quickly, they were aggressive in their deployment as well. Every minute was crucial to their sales team, so in most cases, they followed the 80/20 rule of thumb — if 80% of the problem is solved after running AI, then it’s time to go live!

Which brings me to my last point.

4. AI isn’t 100% guaranteed

AI will never be 100% right, which means you need to start with the lower risks and higher gains first and keep monitoring performance for potential risks. We saw this in the decision to vaccinate frontline workers first. In business, we do this by focusing on those who most need AI to help them make decisions — usually sales and marketing — and they become our front line. From there, we apply AI to the remaining departments that will benefit from it.

Now, as a data technologist, it’s natural for me to “trust the science.” I take all of this information — be that around the vaccine or enterprise data — and churn it into statistics and predictions while living quite well with the uncertainty. What the vaccine rollout has done is create a moment in time for the scientific perspective to sink in throughout the world. And when the enterprise joins in on this new wave of scientific thinking, it will drastically change the way AI, big data and technology impact business.

Amnon Mishor is founder and CTO of Leadspace.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

GPT-3’s free alternative GPT-Neo is something to be excited about


The advent of Transformers in 2017 completely changed the world of neural networks. Ever since, the core concept of Transformers has been remixed, repackaged, and rebundled in several models. The results have surpassed the state of the art in several machine learning benchmarks. In fact, currently all top benchmarks in the field of natural language processing are dominated by Transformer-based models. Some of the Transformer-family models are BERT, ALBERT, and the GPT series of models.

In any machine learning model, the most important components of the training process are:

  1. The code of the model — the components of the model and its configuration
  2. The data to be used for training
  3. The available compute power

With the Transformer family of models, researchers finally arrived at a way to increase the performance of a model infinitely: You just increase the amount of training data and compute power.

This is exactly what OpenAI did, first with GPT-2 and then with GPT-3. Being a well-funded ($1 billion+) company, it could afford to train some of the biggest models in the world. A private corpus of 500 billion tokens was used for training the model, and approximately $50 million was spent in compute costs.

While the code for most of the GPT language models is open source, the model is impossible to replicate without the massive amounts of data and compute power. And OpenAI has chosen to withhold public access to its trained models, making them available via API to only a select few companies and individuals. Further, its access policy is undocumented, arbitrary, and opaque.

Genesis of GPT-Neo

Stella Biderman, Leo Gao, Sid Black, and others formed EleutherAI with the idea of making AI technology that would be open source to the world. One of the first problems the team chose to tackle was making a GPT-like language model that would be accessible to all.

As mentioned before, most of the code for such a model was already available, so the core challenges were to find the data and the compute power. The Eleuther team set out to generate an open source data set of a scale comparable to what OpenAI used for its GPT language models. This led to the creation of The Pile. The Pile, released in December 2020, is an 825GB data set specifically designed to train language models. It contains data from 22 diverse sources, including academic sources (arXiv, PubMed, FreeLaw, etc.), internet webpages (StackExchange, Wikipedia, etc.), dialogs from subtitles, GitHub, etc.

[Image: composition of The Pile. Source: The Pile paper, arXiv.]

For compute, EleutherAI was able to use idle compute from TPU Research Cloud (TRC). TRC is a Google Cloud initiative that supports research projects with the expectation that the results of the research will be shared with the world via open source code, models, etc.

On March 22, 2021, after months of painstaking research and training, the EleutherAI team released two trained GPT-style language models, GPT-Neo 1.3B and GPT-Neo 2.7B. The code and the trained models are open sourced under the MIT license. And the models can be used for free using HuggingFace’s Transformers platform.
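As an illustration, loading one of the released checkpoints takes only a few lines with HuggingFace Transformers. This is a sketch, not an official recipe: it assumes the `transformers` and `torch` packages are installed, and note that the 2.7B checkpoint is a multi-gigabyte download.

```python
def generate(prompt: str,
             model_name: str = "EleutherAI/gpt-neo-2.7B",
             max_length: int = 50) -> str:
    """Generate a completion from a GPT-Neo checkpoint."""
    # Deferred import: transformers is a heavyweight dependency,
    # and the first call downloads the model weights.
    from transformers import pipeline
    generator = pipeline("text-generation", model=model_name)
    out = generator(prompt, max_length=max_length, do_sample=True)
    return out[0]["generated_text"]

# Example usage (triggers the model download on first run):
#   generate("The advent of Transformers in 2017")
```

Swapping `model_name` for `"EleutherAI/gpt-neo-1.3B"` gives the smaller released model.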

Comparing GPT-Neo and GPT-3

Let’s compare GPT-Neo and GPT-3 with respect to the model size and performance benchmarks and finally look at some examples.

Model size. In terms of model size and compute, the largest GPT-Neo model consists of 2.7 billion parameters. In comparison, the GPT-3 API offers four models, ranging from 2.7 billion to 175 billion parameters.
Caption: GPT-3 parameter sizes as estimated here, and GPT-Neo as reported by EleutherAI.

As you can see, GPT-Neo is bigger than GPT-2 and comparable to the smallest GPT-3 model.

Performance benchmark metrics. EleutherAI reports that GPT-Neo outperformed the closest comparable GPT-3 model (GPT-3 Ada) on all NLP reasoning benchmarks.

GPT-Neo outperformed GPT-3 Ada on Hellaswag and Piqa. Hellaswag is an intelligent multiple-choice sentence-completion benchmark with a context paragraph and four endings. Piqa measures common-sense reasoning, where the machine has to pick the one of two sentences that makes the most sense. GPT-Neo also outperformed GPT-3 Ada on Winogrande, a benchmark that uses common sense to resolve ambiguous pronouns in a sentence.

However, GPT-3 Davinci, the largest version of GPT-3, with about 65 times as many parameters, comfortably beats GPT-Neo in all the benchmarks, as you would expect.

Caption: Model metrics as reported by EleutherAI, except GPT-3 175B, which is from Open AI’s GPT-3 paper.

Examples. Let’s look at a few side-by-side examples of generated text from the largest GPT-3 model (from various GPT-3 Davinci examples found online) and GPT-Neo (that I generated using HuggingFace’s GPT-Neo 2.7B Transformers implementation).

The first example we will look at is completion of ELI-5 format sentences, where the text in italics was the prompt given to the model.

[Image: ELI-5 sentence completions by GPT-3 and GPT-Neo]

I would say both GPT-Neo and GPT-3 worked equally well in this example.

The next experiment is to check if GPT-Neo can correct grammatically incorrect sentences like GPT-3 can. Again the italicized parts are the prompt provided to the model.

[Image: grammar-correction examples by GPT-3 and GPT-Neo]

Again, GPT-Neo worked great in correcting “eat” to “ate,” especially considering the model was not specifically trained to do this.

Finally, the famous “discovery of English-speaking unicorns” piece (GPT-2 version here and GPT-3 version here) reimagined and written by GPT-Neo. The prompt given to the model is in italics and bold.

In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The only evidence was a single word: “hello.”

“The unicorns are very good listeners, so they learned to speak English from watching humans,” said Dr. David Orme, chief executive of the Zoological Society of London, in a press release. “In fact, they also learned to speak Spanish from watching humans. They said hello to us in both languages.”

While the discovery is stunning in and of itself, the news gets even more mind-blowing when you consider that the unicorns were there for thousands of years. They were “living fossils,” Orme said, which is a fancy way of saying that their origins have yet to be determined.

“It is extremely exciting that we now have evidence of a population of unicorns, that we know to be around and that is living,” Dr. John Fitzpatrick, a conservation and anthropology professor at the University of California, said in a press release. “This shows that they are not just a myth and that humans are not the only species on Earth to have developed language.”

The findings have been published in a new study in the journal Proceedings of the Royal Society B: Biological Sciences.

A ‘unicorn’ from the Andean mountains in Colombia. (Image: The Royal Society)

The discovery was made this summer in a remote but stunning valley in the Andean Mountains in Colombia called Bureta. It’s believed the unicorns were in their 20s. “It’s a very unusual place to find these animals and at the moment there is no evidence that humans have been there before,” Orme said.

The scientists said the unicorns had been living in that valley as long as their species has, which is estimated at at least 200,000 years.

This means the area’s rich history of megafauna, including dinosaurs, pterosaurs and saber-toothed cats, is still far from over.

“If it is true in a relatively isolated valley near Bureta Colombia that is more than 200,000 years old and now also having a population of these animals, then Bureta is truly a unique and special place,” Fitzpatrick said.

Once again, GPT-Neo was able to generate a coherent, almost-believable article without missing out on the central themes — unicorn discovery, the English-speaking aspect, the Andes, etc.

All in all, the performance metrics of GPT-Neo 2.7B on NLP benchmarks are better than those of GPT-3 2.7B (Ada) but much worse than those of GPT-3 175B (Davinci). Qualitatively, though, GPT-Neo 2.7B’s completions and writing were as good as even GPT-3 175B (Davinci), the largest GPT-3 model.

The bottom line here is: GPT-Neo is a great open source alternative to GPT-3, especially given OpenAI’s closed access policy.

Abhishek Iyer is the founder of FreeText AI, a company specializing in text mining and review analysis.


For gaming conferences, the future is hybrid


Did you miss GamesBeat Summit 2021? Watch on-demand here! 


If there was ever a moment in time for the video game industry to change how we conduct business, generate sales, and influence the market to be more inclusive, this might be it.

As a veteran attendee and exhibitor of video game conferences, I’ve spent an immeasurable amount of time, money, and resources attending gaming shows since the early 2000s. Some of my most memorable times have happened during these events. However, increasing costs and jam-packed schedules made it difficult to relax and focus on the event’s objective: establishing and strengthening meaningful relationships.

Physical events remain vital for their local business communities. But ever-rising costs force many of these to march toward economies of scale, relying on large regional and international participation.

This is why virtual events matter.

Analysts forecast that virtual events will grow tenfold over the next decade, alongside the $1 trillion physical events industry. So how can they grow together? By taking a hybrid approach, where events have both a physical and virtual footprint, designed to work together and offer the best of both.

Virtual is not just an alternative, a Plan B. It is an equally valuable companion that gives attendees the opportunity to participate when they otherwise couldn’t. The result is a more cost-effective, accessible, and inclusive event experience that improves the quality for all of us, the stakeholders.

If you’re in the gaming business, at some point during the pandemic you probably explored the virtual event landscape. And like me, you may not have found exactly what you were looking for. The business I have been running and the dream that fuels it needs a new way to market. And when this pandemic ends, the business landscape will be altered.

I’m looking for a fully immersive next-gen experience that scales. Plain and simple. Give me rich graphics, great networking, business matchmaking tools, and the ability to do it all from the comfort of my web browser. Running a virtual event inside a web browser makes it easily accessible to everyone; business developers and marketing departments typically don’t have high-end 3D gaming hardware to run native clients. One of my biggest criticisms is making clients download software to their business devices for something that can easily run inside a web browser. Business developers shouldn’t need to download anything to participate in a modern virtual event.

Perhaps cloud gaming technology is missing its true calling: providing better pixel streaming support for B2B events. Google, Amazon, and Microsoft have the hardware capacity to support this effort today and, with some focus on optimization, likely at a much lower cost than current vGPU prices of 4 cents per user per minute.
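To put that rate in perspective, here is a back-of-the-envelope cost model at the quoted 4 cents per user per minute; the attendance figures are made up purely for illustration.

```python
# Back-of-the-envelope cost of pixel streaming at the cited vGPU
# rate of $0.04 per user per minute.
RATE_PER_USER_MINUTE = 0.04

def streaming_cost(users: int, hours: float) -> float:
    """Total pixel-streaming cost in dollars for one session."""
    return users * hours * 60 * RATE_PER_USER_MINUTE

# A hypothetical one-day, 8-hour virtual expo with 5,000 attendees:
cost = streaming_cost(5_000, 8)  # 5,000 users x 480 min x $0.04 ≈ $96,000
```

At that price, a week-long show with tens of thousands of concurrent attendees quickly runs into the millions, which is why optimization matters so much here.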

With that said, I recognize this as a once-in-a-lifetime opportunity for us to improve, make things more inclusive, and bring us back together.

The future of both virtual and hybrid events has been thrust upon us. The tradeshow industry is going to hustle towards the future. Events are the gaming industry’s lifeblood and they must go on no matter what. We need to future-proof our ecosystem and innovate.

Innovation knows no bounds. Virtual event technology is clearing the big technical hurdles that have kept truly viable options from existing, such as limited concurrent-user capacity per session and the cost of pixel streaming to all devices. Virtual events that once may have appeared empty due to networking technology constraints can now be full of vibrant life and interaction. Removing these constraints is the first step in enhancing virtual event-based social experiences.

Equally important is the responsible integration of communication, broadcasting, networking, and interactive technologies that bring us together “outside of the box.”

By providing scalable solutions for those that need them, virtual events such as Game Carnival, hosted by Xsolla, can achieve their B2B initiatives at scale. Xsolla was one of the first companies in our industry to attempt a virtual event to support game developers in need of exposure during the pandemic last year. Events such as GCX (formerly Guardian Con), hosted by Rare Drop, can achieve their charitable goals by supporting St. Jude Children’s Research Hospital. Other examples such as virtual launch parties, tech seminars, and game showcases can all benefit in new ways thanks to the rise of virtual event technology. As a GDC veteran, I’d like to see big events such as GDC take a page out of Xsolla’s book and offer a truly immersive experience.

Nowadays, developers don’t have to wait for the next trade show like GDC or CES to learn about a new technology. Consumers don’t have to wait for the next big digital reveal at E3 or Gamescom. Today there exists a new world where access to the next big event is literally at your fingertips. As virtual event technologies mature, anyone anywhere will be able to host a virtual event — and at a low cost — anytime.

Without square footage limitations, we can leverage the powerful gaming engines, such as Unreal Engine or Unity, to run potentially millions of simultaneous sessions to create a real metaverse of conference-goers and consumers, all connected, transacting and sharing. Imagine attending GDC as a hybrid event. Where a million developers can join in on the GDC experience from around the globe from the comfort of their computer chair. Imagine being able to interact in 3D with potential B2B clients of your new technological innovation virtually while doing a physical platform talk at GDC. You, as the presenter, are being streamed into the virtual event, on stage – to a million avatars. This truly is the future where everything begins to scale more easily. With unbound engagement potential, virtual event technology can be the elusive conduit between the games industry and the non-endemics that has long been sought after.

Virtual event technology literally enables us to disconnect from reality. I’m excited to see what innovations await us. The gaming space is on the cusp of amazing breakthroughs in virtual and hybrid events. Yes, we must hit all the checkboxes for a modern communications tool — but we can be so much more. We can go beyond those real-world limits and experience something completely out of this world.

Vince McMullin is an industry veteran and tech boss having worked with companies such as Microsoft, Nvidia, and Epic Games, with more than 20 years of experience presenting and exhibiting at global tradeshows and conferences.

GamesBeat

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.

How will you do that? Membership includes access to:

  • Newsletters, such as DeanBeat
  • The wonderful, educational, and fun speakers at our events
  • Networking opportunities
  • Special members-only interviews, chats, and “open office” events with GamesBeat staff
  • Chatting with community members, GamesBeat staff, and other guests in our Discord
  • And maybe even a fun prize or two
  • Introductions to like-minded parties

Become a member

Adopting zero trust architecture can limit ransomware’s damage


The fact that a pipeline operator proactively shut down operations to deal with a ransomware attack highlights the fact that organizations are not resilient. From a security perspective, technologies such as zero trust and microsegmentation could have limited the amount of damage ransomware could inflict.

There are many ways for ransomware to enter a network, such as exploiting a known vulnerability, launching phishing and other social engineering attacks, and trying to steal user credentials for network tools (for example, Remote Desktop Protocol, or RDP), Trend Micro Research wrote in a company blog. Once in, attackers move laterally through the networks to find valuable data and establish persistence to stay in the network.

Enterprises should also move ahead with implementing zero trust architecture within their environment to mitigate the effects of this kind of malware, wrote Brian Kime, a senior analyst at research firm Forrester. Zero trust architecture limits lateral movement and contains the blast radius, Kime said.

Many networks rely on perimeter defenses to keep attackers out. Once in, however, there is nothing to prevent the intruder from moving anywhere within the network. Limiting lateral movement reduces potential damage since the attacker is not able to access the most sensitive parts of the network. In the case of ransomware, attackers can cause a lot of damage by locking up systems, disrupting business operations, and threatening to expose corporate data.

Ransomware attack locks up network

Colonial Pipeline, a pipeline operator responsible for transporting 45 percent of the fuel along the East Coast of the United States, proactively shut down operations on May 7 after a ransomware incident in its corporate network. In such an attack, ransomware encrypts data so that it cannot be accessed without purchasing a decryption tool from the attackers. Colonial Pipeline shut down operations because the attack affected its billing system and there were concerns the company wouldn’t be able to properly monitor fuel flowing through the pipeline and send out invoices, sources told information security journalist Kim Zetter.

Ransomware group DarkSide was behind the attack against Colonial Pipeline. The group stole over 100 GB of data and then encrypted the files. Victims like Colonial Pipeline pay the ransom — news reports suggest the company paid the attack group $5 million — to speed up data recovery and also in hopes the attackers don’t leak or sell the data for others to see.

The attack group claimed to be sitting on top of 1.9 TB of data stolen from multiple victims. Trend Micro Research has identified at least 40 victims affected by DarkSide.

“We have collectively failed to appreciate how fragile these systems are and how easy it is for cyber criminals to affect business operations and potentially create unsafe conditions in industrial environments,” Trend Micro Research wrote. “Colonial Pipeline isn’t the first time ransomware or destructive malware in a corporate network has disrupted or degraded industrial operations and sadly it will not be the last.”

Shifting to zero trust

Zero trust is relatively straightforward: Organizations shouldn’t automatically trust anything trying to connect to their network or access their data. Instead, they should verify everything before granting access. Zero trust architecture does not need to be costly or complex to implement, as enterprises can implement it with current technology and updated policies and standards. One way is to identify automated systems in the environment and use allow lists to restrict access to those systems.
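A minimal sketch of that allow-list idea, with default-deny semantics, might look like the following; the service and resource names here are purely illustrative.

```python
# Default-deny allow list in the spirit of zero trust: every access
# request is refused unless the (identity, resource) pair has been
# explicitly permitted.
ALLOW_LIST: dict[str, set[str]] = {
    "billing-service": {"invoice-db"},
    "backup-agent":    {"invoice-db", "file-archive"},
}

def is_allowed(identity: str, resource: str) -> bool:
    """Return True only for explicitly listed (identity, resource) pairs."""
    return resource in ALLOW_LIST.get(identity, set())
```

An unknown identity, or a known identity reaching for an unlisted resource, simply gets denied, which is exactly the property that limits lateral movement once ransomware has a foothold.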

“Zero Trust is not one product or platform; it’s a security framework built around the concept of ‘never trust, always verify’ and ‘assuming breach,’” Forrester analyst Steve Turner wrote earlier this year.

Chris Krebs, the former head of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), said security professionals at every organization should be working to limit ransomware’s impact. Examples include running and testing backups, implementing multifactor authentication (to prevent remote attempts to access user accounts), securing privileged accounts, and giving employees privileged accounts only when requested.

“Your response plan needs to include what happens when you inevitably get infected with ransomware and what that subsequent planning is — that should include both your technology and business departments. It also needs to include who you will contact for help when you’re inevitably hit, which could be your MSSP or another incident response organization that you have on retainer,” Forrester analysts Allie Mellen and Steve Turner wrote on the Forrester blog, echoing Krebs’ advice.

The cybersecurity executive order from President Biden and his administration states that federal agencies and private-sector partners have to implement a zero trust framework throughout the federal government. The order calls for multifactor authentication, data encryption both at rest and in transit, a zero trust security model, and improvements in endpoint protection and incident response.

“Incremental improvements will not give us the security we need; instead, the federal government needs to make bold changes and significant investments in order to defend the vital institutions that underpin the American way of life,” the order said.


Final Fantasy XIV: Endwalker adds the Reaper job


Final Fantasy XIV: Endwalker debuted its second new job class today during the Final Fantasy XIV Digital Fan Fest today. And it did so in grand fashion, with producer Naoki Yoshida taking to the stage dressed in black and wielding a scythe.

Endwalker will be the MMO’s fourth major expansion, and it is coming out this fall. Square Enix revealed the Sage job, a new healer class, earlier this year. Now we have the Reaper, a melee damage-dealer who can also summon a voidsent (a shadowy monster from a dark, ruined world that we have seen before in the game) to aid them in battle.

A new trailer for Endwalker — which you can watch below — also revealed that players will be going to Sharlayan, a scholarly city often mentioned in the game but not yet visited. Old Sharlayan will serve as the expansion’s hub area. Oh, and we’re going to the moon too.

The trailer also showed that the game’s villain, Zenos, has himself taken up the scythe and become a Reaper. While most FFXIV jobs come from previous Final Fantasy games, the Reaper is an original creation, although it will be sharing some armor with the Dragoon job.

To unlock the Reaper, you’ll need to have progressed at least one other job to level 70. Reaper starts at level 70 (Endwalker will have a level cap of 90). While many RPGs have you pick a class at the start and force you to stick with it, every character in Final Fantasy XIV has the potential to access every job in the game.

Endwalker is ending the main story of Final Fantasy XIV that started with the game’s 2013 relaunch, A Realm Reborn. After this, the game will start a new story.

GamesBeat

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.

How will you do that? Membership includes access to:

  • Newsletters, such as DeanBeat
  • The wonderful, educational, and fun speakers at our events
  • Networking opportunities
  • Special members-only interviews, chats, and “open office” events with GamesBeat staff
  • Chatting with community members, GamesBeat staff, and other guests in our Discord
  • And maybe even a fun prize or two
  • Introductions to like-minded parties

Become a member

E3 2021 has already begun | GB Decides 196

It feels like E3 has already begun, and while some fun announcements are coming, the GamesBeat Decides crew uses this episode to get the bad news out of the way. To that end, GamesBeat editors Jeff Grubb and Mike Minotti talk about Elden Ring’s chances of showing up at E3.

They also discuss their expectations for Starfield releasing in 2021. Join them, won’t you?

Talend: 36% of business leaders don’t rely on data to make decisions

Join Transform 2021 this July 12-16. Register for the AI event of the year.


Even as enterprise leaders tout the importance of data, 36% of business leaders don’t rely on it for making critical decisions, according to a survey by Talend, an open source data integration platform. The same survey found that 78% of business executives face challenges effectively working with data to make decisions.

Above: 40% of business leaders still rely on gut decisions, not data.

Image Credit: Talend

Our relationship with data is not healthy. Talend’s survey found only 40% of executives always trust the data they work with. For decades, managing and using data for analysis was focused on the mechanics: the collecting, cleaning, storing, and cataloging of as much data as possible, then figuring out how to use it later. Companies don’t know what data they have, where it is, or who is using it, and, critically, they have no way to measure their data health.

Data health is Talend’s vision of a comprehensive system for ensuring the well-being and return of corporate information. Data health offers proactive treatments, quantifiable measures, and preventive steps to identify and correct issues, ensuring that corporate data is clean, complete, and uncompromised.

Data health is a complex journey shaped by each company’s unique requirements, regulations, and risk tolerance. It will take substantial market collaboration and research to align on appropriate standards for different companies. Eventually, data health solutions will help create a universal set of metrics to evaluate the health of corporate data and establish it as an essential indicator of the strength of a business. Talend’s initial framework imagines four primary focus areas for establishing data health: reliability, visibility, understanding, and value. Talend believes that data health will become a key, if not the most important, performance framework used within and across organizations to monitor and evaluate the health of the company. With this data-health-first approach, and new standards, leaders can level the employee playing field and drive a data-charged cultural change.

From March 24 to April 8, 2021, Talend conducted a survey via Qualtrics among 529 independent respondents worldwide (57% North America, 26% Asia-Pacific, 17% Europe). The respondents are all executives — with titles ranging from director to the C-suite — from medium and large companies making more than $10 million in annual revenue.

Read Talend’s full Data Health Survey report.

The RetroBeat: Adventures in the Magic Kingdom is my NES guilty pleasure


I love video games based on real theme parks. Heck, I even have a soft spot for that awful Universal Studios title for GameCube. Look, I just like theme parks, and any experience that makes me feel like I’m in one makes me happy.

As a kid, Adventures in the Magic Kingdom was the first game of this kind I ever played. It came out for the NES back in 1990, when I was just 4 years old. Even then, two of my favorite hobbies were already ingrained into my being: video games and Disney.

Capcom made Adventures in the Magic Kingdom. If you are an NES fan, you know that Capcom made a bunch of amazing Disney-based 8-bit games for the console, like DuckTales and Chip ‘n Dale Rescue Rangers. Adventures in the Magic Kingdom isn’t as good as those classics. But I still adore it.

Keys to the kingdom

Adventures in the Magic Kingdom has a plot, if you can believe it. You have to help Mickey Mouse find six keys so that he can start the park’s parade on time. You collect those keys by walking around the Magic Kingdom and partaking in various ride-based minigames.

A lot of the fun comes from just walking around an official 8-bit version of the iconic park. Sure, it’s small and lacks detail. You aren’t going to find every ride from the actual Magic Kingdom represented here. Also, despite the name of the game and the prominent placement of Orlando’s Cinderella Castle on the box, the layout more closely resembles California’s Disneyland. But I still enjoy taking a stroll through this pixelized interpretation of one of my favorite places in the world.

Many of the park’s most popular rides host a minigame. For Pirates of the Caribbean and Haunted Mansion, you get 2D sidescroller sections. For Autopia, you’re in a car race. Space Mountain is a sort of quick-time-event sequence, making you hit button prompts as you zoom through the galaxy. You even have to walk around the park and answer trivia questions to unlock one of Mickey’s keys.


https://youtube.com/watch?v=Ophx0PoI0Jg

None of these minigames are amazing. Even the best of them, the 2D Pirates of the Caribbean and Haunted Mansion stages, can’t match the quality of the best NES sidescrollers like Capcom’s own Mega Man. But something about the variety of it all does replicate that theme park experience. Each ride-based minigame is different.

Also, the game’s structure means that I get to see every level even if I can’t actually beat any of them, which was definitely the case back when I was a kid.

More theme park video games, please

Disney would actually release something of a spiritual successor to Adventures in the Magic Kingdom with Kinect: Disneyland Adventures for the Xbox 360 in 2011. Really, it’s the same idea. You walk around a virtual Disney park where rides host different minigames. And while the original version required Microsoft’s motion-tracking camera, the game is now available on Xbox One and PC with controller and keyboard support.

I hope that this is a formula Disney will return to again, maybe for a different park. I’d love to have a virtual Epcot to explore. Heck, now I’m sad that we never got an Adventures in the Magic Kingdom sequel back on the NES that used Epcot for its template.

While Disney is king in most entertainment fields, consistent success in gaming has eluded the Mouse House. Maybe it would do better if it tried harder to capture more of that theme park magic in video game form.

The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.

New deep learning model brings image segmentation to edge devices

A new neural network architecture designed by artificial intelligence researchers at DarwinAI and the University of Waterloo will make it possible to perform image segmentation on computing devices with limited power and compute capacity.

Segmentation is the process of determining the boundaries and areas of objects in images. We humans perform segmentation without conscious effort, but it remains a key challenge for machine learning systems. It is vital to the functionality of mobile robots, self-driving cars, and other artificial intelligence systems that must interact with and navigate the real world.

Until recently, segmentation required large, compute-intensive neural networks. This made it difficult to run these deep learning models without a connection to cloud servers.

In their latest work, the scientists at DarwinAI and the University of Waterloo have managed to create a neural network that provides near-optimal segmentation and is small enough to fit on resource-constrained devices. Called AttendSeg, the neural network is detailed in a paper that has been accepted at this year’s Conference on Computer Vision and Pattern Recognition (CVPR).

Object classification, detection, and segmentation

One of the key reasons for the growing interest in machine learning systems is the problems they can solve in computer vision. Some of the most common applications of machine learning in computer vision include image classification, object detection, and segmentation.

Image classification determines whether a certain type of object is present in an image or not. Object detection takes image classification one step further and provides the bounding box where detected objects are located.

Segmentation comes in two flavors: semantic segmentation and instance segmentation. Semantic segmentation specifies the object class of each pixel in an input image. Instance segmentation separates individual instances of each type of object. For practical purposes, the output of segmentation networks is usually presented by coloring pixels. Segmentation is by far the most complicated type of classification task.
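
To make the distinction concrete, here is a toy sketch (not from the paper, and the class names are hypothetical) of the two output formats, with masks represented as per-pixel label arrays:

```python
import numpy as np

# Semantic segmentation assigns a class ID to every pixel
# (0 = background, 1 = "person" in this made-up example).
semantic_mask = np.array([
    [0, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
])  # two "person" regions share class ID 1

# Instance segmentation additionally separates individual objects:
# the same two regions get distinct instance IDs 1 and 2.
instance_mask = np.array([
    [0, 1, 0, 2],
    [0, 1, 0, 2],
    [0, 0, 0, 0],
])

print(np.unique(semantic_mask))  # [0 1]
print(np.unique(instance_mask))  # [0 1 2]
```

Both masks cover the same pixels; only instance segmentation can tell the two objects of the same class apart.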

Above: Image classification vs. object detection vs. semantic segmentation (credit: codebasics).

The complexity of convolutional neural networks (CNNs), the deep learning architecture commonly used in computer vision tasks, is usually measured by the number of parameters they have. The more parameters a neural network has, the more memory and computational power it requires.

RefineNet, a popular semantic segmentation neural network, contains more than 85 million parameters. At 4 bytes per parameter, an application using RefineNet requires at least 340 megabytes of memory just to hold the model. And given that the performance of neural networks depends largely on hardware that can perform fast matrix multiplications, the model must be loaded onto a graphics card or another parallel computing unit, where memory is scarcer than the computer’s RAM.

Machine learning for edge devices

Due to their hardware requirements, most applications of image segmentation need an internet connection to send images to a cloud server that can run large deep learning models. The cloud connection can pose additional limits to where image segmentation can be used. For instance, if a drone or robot will be operating in environments where there’s no internet connection, then performing image segmentation will become a challenging task. In other domains, AI agents will be working in sensitive environments and sending images to the cloud will be subject to privacy and security constraints. The lag caused by the roundtrip to the cloud can be prohibitive in applications that require real-time response from the machine learning models. And it is worth noting that network hardware itself consumes a lot of power, and sending a constant stream of images to the cloud can be taxing for battery-powered devices.

For all these reasons (and a few more), edge AI and tiny machine learning (TinyML) have become hot areas of interest and research both in academia and in the applied AI sector. The goal of TinyML is to create machine learning models that can run on memory- and power-constrained devices without the need for a connection to the cloud.

Above: The architecture of AttendSeg on-device semantic segmentation neural network.

With AttendSeg, the researchers at DarwinAI and the University of Waterloo tried to address the challenges of on-device semantic segmentation.

“The idea for AttendSeg was driven by both our desire to advance the field of TinyML and market needs that we have seen as DarwinAI,” Alexander Wong, co-founder at DarwinAI and Associate Professor at the University of Waterloo, told TechTalks. “There are numerous industrial applications for highly efficient edge-ready segmentation approaches, and that’s the kind of feedback along with market needs that I see that drives such research.”

The paper describes AttendSeg as “a low-precision, highly compact deep semantic segmentation network tailored for TinyML applications.”

The AttendSeg deep learning model performs semantic segmentation at an accuracy that is almost on par with RefineNet while cutting the number of parameters down to 1.19 million. Interestingly, the researchers also found that lowering the precision of the parameters from 32 bits (4 bytes) to 8 bits (1 byte) did not result in a significant performance penalty while enabling them to shrink the memory footprint of AttendSeg by a factor of four. The model requires a little over one megabyte of memory, which is small enough to fit on most edge devices.

“[8-bit parameters] do not pose a limit in terms of generalizability of the network based on our experiments, and illustrate that low precision representation can be quite beneficial in such cases (you only have to use as much precision as needed),” Wong said.
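
The memory figures above follow from simple arithmetic. A back-of-the-envelope sketch of the weights-only estimate (the helper name is illustrative, and actual usage also includes activations and runtime overhead):

```python
def param_memory_mb(num_params, bytes_per_param):
    """Approximate memory needed just to store model weights, in megabytes."""
    return num_params * bytes_per_param / 1e6

# RefineNet: ~85 million parameters at 32-bit (4-byte) precision
refinenet_mb = param_memory_mb(85e6, 4)    # 340.0 MB
# AttendSeg: ~1.19 million parameters at 8-bit (1-byte) precision
attendseg_mb = param_memory_mb(1.19e6, 1)  # ~1.19 MB
```

The same arithmetic shows why quantizing from 32-bit to 8-bit parameters shrinks the footprint by exactly a factor of four.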

Above: Experiments show AttendSeg provides optimal semantic segmentation while cutting down the number of parameters and memory footprint.

Attention condensers for computer vision

AttendSeg leverages “attention condensers” to reduce model size without compromising performance. Self-attention mechanisms improve the efficiency of neural networks by focusing on the information that matters. Self-attention techniques have been a boon to the field of natural language processing and a defining factor in the success of deep learning architectures such as Transformers. While previous architectures such as recurrent neural networks had limited capacity on long sequences of data, Transformers use self-attention mechanisms to expand their range. Deep learning models such as GPT-3 leverage Transformers and self-attention to churn out long strings of text that (at least superficially) maintain coherence over long spans.
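
For reference, here is a minimal NumPy sketch of generic scaled dot-product self-attention. This illustrates only the general mechanism described above; attention condensers themselves use a more compact, specialized design, and the projection setup here is an assumption for illustration:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention for a single sequence."""
    # Project inputs to queries, keys, and values
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Pairwise similarity between tokens, scaled by sqrt of key dimension
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable softmax over the key axis
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mix of the value vectors
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # 5 tokens, 8 features
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (5, 8)
```

Each output row attends to every input token, which is what lets attention-based models capture long-range dependencies that recurrent networks struggle with.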

AI researchers have also leveraged attention mechanisms to improve the performance of convolutional neural networks. Last year, Wong and his colleagues introduced attention condensers as a very resource-efficient attention mechanism and applied them to image classifier machine learning models.

“[Attention condensers] allow for very compact deep neural network architectures that can still achieve high performance, making them very well suited for edge/TinyML applications,” Wong said.

Above: Attention condensers improve the performance of convolutional neural networks in a memory-efficient way.

Machine-driven design of neural networks

One of the key challenges of designing TinyML neural networks is finding the best performing architecture while also adhering to the computational budget of the target device.

To address this challenge, the researchers used “generative synthesis,” a machine learning technique that creates neural network architectures based on specified goals and constraints. Basically, instead of manually fiddling with all kinds of configurations and architectures, the researchers provide a problem space to the machine learning model and let it discover the best combination.

“The machine-driven design process leveraged here (Generative Synthesis) requires the human to provide an initial design prototype and human-specified desired operational requirements (e.g., size, accuracy, etc.) and the MD design process takes over in learning from it and generating the optimal architecture design tailored around the operational requirements and task and data at hand,” Wong said.

For their experiments, the researchers used machine-driven design to tune AttendSeg for Nvidia Jetson, hardware kits for robotics and edge AI applications. But AttendSeg is not limited to Jetson.

“Essentially, the AttendSeg neural network will run fast on most edge hardware compared to previously proposed networks in literature,” Wong said. “However, if you want to generate an AttendSeg that is even more tailored for a particular piece of hardware, the machine-driven design exploration approach can be used to create a new highly customized network for it.”

AttendSeg has obvious applications for autonomous drones, robots, and vehicles, where semantic segmentation is a key requirement for navigation. But on-device segmentation can have many more applications.

“This type of highly compact, highly efficient segmentation neural network can be used for a wide variety of things, ranging from manufacturing applications (e.g., parts inspection / quality assessment, robotic control) medical applications (e.g., cell analysis, tumor segmentation), satellite remote sensing applications (e.g., land cover segmentation), and mobile application (e.g., human segmentation for augmented reality),” Wong said.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

AI Weekly: How to implement AI responsibly

Implementing AI responsibly implies adopting AI in a manner that’s ethical, transparent, and accountable as well as consistent with laws, regulations, norms, customer expectations, and organizational values. “Responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable.

But organizations often underestimate the challenges in attaining this. According to Boston Consulting Group (BCG), less than half of enterprises that achieve AI at scale have fully mature, responsible AI deployments. Organizations’ AI programs commonly neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation, BCG analysts found.

The Responsible AI Institute (RAI) is among the consultancies aiming to help companies realize the benefits of AI implemented thoughtfully. An Austin, Texas-based nonprofit founded in 2017 by University of Texas, USAA, Anthem, and CognitiveScale, the firm works with academics, policymakers, and nongovernmental organizations with the goal of “unlocking the potential of AI while minimizing unintended consequences.”

According to chairman and founder Manoj Saxena, adopting AI responsibly requires a holistic, end-to-end approach, ideally using a multidisciplinary team. There are multiple ways that AI checks can be put into production, including:

  • Awareness of the context in which AI will be used and could create biased outcomes.
  • Engaging product owners, risk assessors, and users in fact-based conversations about potential biases in AI systems.
  • Establishing a process and methodology to continually identify, test, and fix biases.
  • Continuing investments in new research coming out around bias and AI to make black-box algorithms more responsible and fair.

“[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact,” Saxena told VentureBeat via email. “[They also need to] invest more to ensure members who are designing the systems are diverse.”

Involving stakeholders

Mark Rolston, founder of global product design consultancy Argodesign and advisor at RAI, anticipates that trust in AI systems will become as paramount as the rule of law has been to the past several hundred years of progress. The future growth for AI into more abstract concept processing capabilities will present even more critical needs around trust and validation of AI, he believes.

“Society is becoming increasingly dependent on AI to support every aspect of modern life. AI is everywhere. And because of this we must build systems to ensure that AI is running as intended — that it is trustworthy. The argument is fundamentally that simple,” Rolston told VentureBeat in an interview. “Today we’re bumping up on the fundamental challenge of AI being too focused on literal problem solving. It’s well-understood that the future lies in teaching AI to think more abstractly … For our part as designers, that will demand the introduction of a whole new class of user interface that convey those abstractions.”

Saxena advocates for AI to be designed, deployed, and managed with “a strong orientation toward human and societal impact,” noting that AI evolves with time as opposed to traditional rules-based computing paradigms. Guardrails need to be established to ensure that the right data is fed into AI systems, he says, and that the right testing is done of various models to guarantee positive outcomes.

Responsible AI practices can bring major business value to bear. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.

“As the adoption of AI continues into all aspects of our personal and professional lives, the need for ensuring that these AI systems are transparent, accountable, bias-free, and auditable is only going to grow exponentially … On the technology and academic front, responsible AI is going to become an important focus for research, innovation, and commercialization by universities and entrepreneurs alike,” Saxena said. “With the latest regulations on the power of data analytics from the FTC and EU, we see hope in the future of responsible AI that will merge the power and promise of AI and machine learning systems with a world that is fair and balanced.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
