Cisco updates Webex, aims to enhance hybrid work experiences with AI

Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More


Cisco today unveiled AI-powered enhancements across its Webex suite, promising to streamline hybrid work with automation while protecting customers’ confidentiality and privacy.

The updates span workspace, collaboration and customer experience categories, built on the Webex platform, and join a long list of AI and machine learning (ML) features already embedded in Cisco products.

The next step forward for such collaboration is video intelligence, which Webex is expanding throughout RoomOS, its conference-room operating system.

With cinematic meeting experiences, cameras follow individuals through voice and facial recognition to capture the best angle of the active speaker. This keeps the focus on the speaker while ensuring that hybrid workers who are not physically present in the room still feel included, according to Cisco.

Event

Transform 2023

Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls.


Register Now

Once-in-a-generation platform shift

RoomOS uses facial detection, information about where people are sitting in a room and voice location to direct the meeting and provide the best view. The feature individually frames and levels participants at eye height, and in speaker mode, uses audio triangulation from devices and an intelligent beam-forming table microphone to quickly and accurately identify the position of the active speaker. 
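
Audio triangulation of this kind is typically built on time-difference-of-arrival (TDOA) estimation between microphone pairs. Cisco has not published its implementation; the sketch below is a minimal, illustrative version that recovers the inter-mic delay via cross-correlation and converts it to an angle of arrival. The mic spacing, sample rate and speed of sound are assumed values, not Cisco's.

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Estimate the sample delay of sig_a relative to sig_b
    via cross-correlation (the core of audio triangulation)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    return int(np.argmax(corr)) - (len(sig_b) - 1)

def direction_from_delay(delay_samples, sample_rate=48_000,
                         mic_spacing_m=0.2, speed_of_sound=343.0):
    """Convert an inter-mic delay into an angle of arrival (radians)."""
    tdoa = delay_samples / sample_rate
    # Clamp to the physically possible range before taking arcsin.
    ratio = np.clip(tdoa * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(ratio))

# Simulate a noise burst reaching mic B 10 samples after mic A.
rng = np.random.default_rng(0)
pulse = rng.standard_normal(256)
mic_a = np.concatenate([pulse, np.zeros(10)])
mic_b = np.concatenate([np.zeros(10), pulse])

delay = estimate_delay(mic_b, mic_a)
angle = direction_from_delay(delay)
print(delay, round(np.degrees(angle), 1))  # prints: 10 20.9
```

A real beam-forming array combines many mic pairs and smooths the estimates over time before steering the camera to the speaker.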

Cinematic meetings support a range of camera intelligence features, including speaker mode, frames, presenter and audience tracking and meeting zones.

“AI is fundamentally transforming the way we work and live,” Jeetu Patel, EVP and GM for security and collaboration at Cisco, told VentureBeat. “It has the potential to make collaboration radically more immersive, personalized and efficient.”

Cisco studied what Patel described as a “once-in-a-generation platform shift” that AI could support. The company’s efforts center on reimagining hybrid work.

Targeting hybrid work experiences

With the rise of hybrid work, it’s essential that organizations provide employees with the flexibility to work in different locations and in different ways. To address this, Cisco has introduced three new AI-based features into its Webex suite.

This includes a super resolution function that ensures crystal-clear video in Webex meetings, even in low-bandwidth conditions. This is achieved through deep neural network video recovery that hides choppiness, removes blocking artifacts and reconstructs the face and body to render in high-resolution images and videos.

Another new AI capability is smart re-lighting, which automatically enhances lighting in Webex meetings to ensure that people look their best in any environment. This is particularly useful when working in poor lighting conditions. The algorithm is trained to recognize different scenarios with people in different lighting, and automatically enhances the light on the facial foreground.

The third new capability is a “be right back” update, which automatically puts up a BRB message, blurs the background, and mutes audio when a user steps away from a Webex meeting. This feature saves time and is simple to use. By leveraging a 3D face mesh algorithm, Webex can detect when a user has stepped away and replace their video feed with a BRB indicator until they return. Users can turn their audio and video back on when they are back in front of the screen.
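
Cisco has not detailed the trigger logic behind the BRB feature, but its control flow can be sketched as a small state machine with hysteresis, where `face_detected` stands in for the per-frame output of the 3D face-mesh model (the frame thresholds here are invented for illustration):

```python
class BrbStateMachine:
    """Toy presence tracker for a 'be right back' feature.

    Flips to 'BRB' after `away_frames` consecutive frames with no
    face, and back to 'present' after `back_frames` consecutive
    frames with a face. The asymmetric thresholds (hysteresis)
    prevent flicker from momentary detection failures.
    """

    def __init__(self, away_frames=30, back_frames=5):
        self.away_frames = away_frames
        self.back_frames = back_frames
        self.state = "present"
        self._streak = 0

    def update(self, face_detected: bool) -> str:
        # A frame counts toward switching only if it contradicts the
        # current state (no face while present, face while away).
        contradicts = face_detected == (self.state == "BRB")
        self._streak = self._streak + 1 if contradicts else 0
        threshold = self.back_frames if self.state == "BRB" else self.away_frames
        if self._streak >= threshold:
            self.state = "present" if self.state == "BRB" else "BRB"
            self._streak = 0
        return self.state

sm = BrbStateMachine(away_frames=3, back_frames=2)
states = [sm.update(f) for f in (False, False, False, True, True)]
print(states)  # ['present', 'present', 'BRB', 'BRB', 'present']
```

In a production pipeline the state transitions would also gate the background blur, the mute toggle and the on-screen BRB indicator.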

AI-powered chat summaries

As customer expectations continue to rise and organizations handle billions of daily customer interactions, it has become challenging for agents and legacy systems to keep up with the volume and personalization required. To this end, Cisco is introducing new AI capabilities for its customer experience solutions, including Webex Contact Center and Webex Connect.

One of the new capabilities, topic analysis in Webex Contact Center, provides actionable insights to business analysts by surfacing the key reasons customers are calling in. This feature is built using a large language model (LLM) that aggregates call transcripts and highlights trends for business analysts.
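
Cisco has not published this pipeline, but the aggregation step can be sketched independently of the model: label each transcript, then count the labels. Here `classify` is a stand-in for the LLM call (hypothetical interface: transcript in, one topic label out), replaced by a trivial keyword matcher so the example runs:

```python
from collections import Counter

def surface_top_topics(transcripts, classify, top_n=3):
    """Label every call transcript, then surface the most
    common reasons customers are calling."""
    counts = Counter(classify(t) for t in transcripts)
    return counts.most_common(top_n)

def toy_classify(transcript):
    # Stand-in for an LLM classifier; a real system would
    # prompt the model to pick or generate a topic label.
    for topic in ("outage", "refund", "billing"):
        if topic in transcript.lower():
            return topic
    return "other"

calls = [
    "My internet outage started Monday",
    "I want a refund for last month",
    "Refund please, the service was down",
    "Question about my billing statement",
    "Outage again in my area",
]
print(surface_top_topics(calls, toy_classify))
# [('outage', 2), ('refund', 2), ('billing', 1)]
```

The design point is that the expensive LLM call happens once per transcript, while trend surfacing is cheap aggregation that analysts can re-slice at will.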

Another capability, agent answers, acts as a real-time coach for human agents by listening and instantly surfacing knowledge-based articles and helpful information for the customer. This capability uses learnings from self-service and automated customer interactions and applies AI to ensure that the highest match probability options are identified first.

Meanwhile, AI-powered chat summaries eliminate the need for agents to read lengthy digital chat histories and provide key takeaways in a quickly digestible format. Lastly, Webex Connect users can now describe the function they want to perform, and AI will generate and return the appropriate code instantly, making it easier to create and iterate customer journeys quickly.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


Why SASE will benefit from faster consolidation of networking and security

Seventy-five percent of enterprises are pursuing vendor consolidation, up from 29% just three years ago, with secure access service edge (SASE) experiencing significant upside growth as a result. SASE is also proving effective at improving enterprise security postures by providing zero trust network access (ZTNA) at scale.

CIOs tell VentureBeat that SASE is gaining traction because of its potential to streamline consolidation plans while extending ZTNA to endpoints and identities.

“If I have five different agents, five different vendors on an endpoint, for example, that’s much overhead support to manage, especially when I have all these exceptional cases like remote users and suppliers. So number one is consolidate,” Kapil Raina, vice president of zero trust, identity, and data security marketing at CrowdStrike, told VentureBeat during a recent interview.

Nearly all cybersecurity leaders have consolidating tech stacks on their roadmaps  

Leading cybersecurity providers, including CrowdStrike, Cisco, Fortinet, Palo Alto Networks, VMware and Zscaler, are fast-tracking product roadmaps to turn consolidation into a growth opportunity. Nearly every CISO VentureBeat spoke with mentions consolidation as one of their top three goals for 2023.


That’s a point not lost on cybersecurity industry leaders. Cynet’s 2022 survey of CISOs found that nearly all have consolidation on their roadmaps, up from 61% in 2021. CISOs believe consolidating their tech stacks will help them avoid missing threats (57%) and reduce the need to find qualified security specialists (56%) while streamlining the process of correlating and visualizing findings across their threat landscape (46%).

At Palo Alto Networks’ Ignite ’22 conference last year, Nikesh Arora, Palo Alto Networks chairman and CEO, shared the company’s vision for consolidation, which is core to its strategy.

Arora added that “customers are actually onto it. They want the consolidation because right now, customers are going through the three biggest transformations ever: They’re going [through a] network security transformation, they’re going through a cloud transformation, and [though] many of them don’t know [it] … they’re about to go to a security operations center (SOC) transformation.” Ignite ’22 also showed Palo Alto Networks doubling its R&D and DevOps teams to fast-track Prisma SASE with new AI-based enhancements.

With a common policy framework and single-pane-of-glass management, Prisma Access is designed to secure hybrid workforces while also providing enterprises with a clear path to consolidating network and security tech stacks, which is what CIOs and CISOs are looking for. Source: Palo Alto Networks Prisma SASE Overview

SASE grows when network and security tech stacks consolidate 

Legacy network architectures can’t keep up with cloud-based workloads, and their perimeter-based security is proving to be too much of a liability, CIOs and CISOs tell VentureBeat anonymously. The risk levels rise to become board-level concerns that give CISOs the type of internal visibility they don’t want. In addition, the legacy network architectures are renowned for poor user experiences and wide security gaps. Esmond Kane, CISO of Steward Health, advises: “Understand that — at its core — SASE is zero trust. We’re talking about identity, authentication, access control and privilege. Start there and then build out.” 

Gartner’s definition of SASE says that “secure access service edge (SASE) delivers converged network and security-as-a-service capabilities, including SD-WAN, SWG, CASB, NGFW and zero trust network access (ZTNA). SASE supports branch offices, remote workers, and on-premises secure access use cases.

“SASE is primarily delivered as a service and enables zero trust access based on the identity of the device or entity, combined with real-time context and security and compliance policies.”

Foundations of SASE

Gartner developed the SASE framework in response to a growing number of client inquiries about adapting existing networking and cybersecurity infrastructure to better support digitally driven ventures.

Enterprises are on the hunt for every opportunity to consolidate tech stacks further. Given SASE’s highly integrated nature, the platform delivers the opportunities CIOs and CISOs need. Combining network-as-a-service and network-security-as-a-service to deliver SASE is why the platform is capitalizing on consolidation so effectively today.

Integrating network-as-a-service and network-security-as-a-service into a unified SASE platform provides real-time data and insights and defines every identity as a security perimeter. Unifying networks and security also strengthens ZTNA, which can scale across every customer, employee, supplier and service endpoint. Source: Gartner, The Future of Network Security Is in the Cloud, August 30, 2019

To become more competitive in SASE without committing all available DevOps and R&D resources to it, nearly all major cybersecurity vendors rely on joint ventures, mergers and acquisitions to get into the market quickly. Cisco’s acquisition of Portshift, Palo Alto Networks’ acquisition of CloudGenix, Fortinet’s acquisition of OPAQ, Ivanti’s acquisition of MobileIron and PulseSecure, Check Point Software Technologies’ acquisition of Odo Security, Zscaler’s acquisition of Edgewise Networks and Absolute Software’s acquisition of NetMotion are just a few of the mergers designed to increase SASE vendors’ competitiveness.

“One of the key trends emerging from the pandemic has been the broad rethinking of how to provide network and security services to distributed workforces,” writes Garrett Bekker, senior research analyst for security at 451 Research (part of S&P Global Market Intelligence), in the 451 Research note titled “Another day, another SASE-fueled deal as Absolute picks up NetMotion.” Bekker continues: “This shift in thinking, in turn, has fueled interest in zero-trust network access (ZTNA) and secure access service edge.”

SASE’s identity-first design further accelerates consolidation  

For a SASE architecture to deliver on its full potential of consolidating network and security services at the tech-stack level, it must first get real-time network activity monitoring and role-specific ZTNA access privileges right. Knowing in real time what’s happening with every endpoint, asset, database and transaction request, down to the identity level, is core to ZTNA. It is also essential for continually improving ZTNA security for distributed edge devices and locations.

ZTNA secures every identity and endpoint, treating each as a security perimeter with multiple digital identities that need constant monitoring and protection. 

SASE is helping close the gaps between network-as-a-service and network security-as-a-service, improving enterprise networks’ speed, security and scale. ZTNA and its related technologies protect endpoints. The increasing number of identities associated with each endpoint increases the risk of relying on legacy network infrastructure that relies only on perimeter-based protection. This is one place SASE and ZTNA are proving their worth.

Identities, access credentials and roles are central to SASE, which is supported by the diverse array of technologies depicted in the above circular diagram. Source: Gartner, The Future of Network Security Is in the Cloud, August 30, 2019


Sen. Murphy’s tweets on ChatGPT spark backlash from former White House AI policy advisor

On Sunday night, Senator Chris Murphy (D-CT) tweeted a shocking claim about ChatGPT — that the model “taught itself” to do advanced chemistry — and AI researchers immediately pushed back in frustration: “Your description of ChatGPT is dangerously misinformed,” Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, wrote in a tweet. “Every sentence is incorrect. I hope you will learn more about how this system actually works, how it was trained, and what its limitations are.”

Suresh Venkatasubramanian, former White House AI policy advisor to the Biden Administration from 2021-2022 (where he helped develop The Blueprint for an AI Bill of Rights) and professor of computer science at Brown University, said Murphy’s tweets are “perpetuating fear-mongering around generative AI.” Venkatasubramanian recently shared his thoughts with VentureBeat in a phone interview. He talked about the dangers of perpetuating discussions about “sentient” AI that does not exist, as well as what he considers to be an organized campaign around AI disinformation. (This interview has been edited and condensed for clarity.)

VentureBeat: What were your thoughts on Christopher Murphy’s tweets? 

Suresh Venkatasubramanian: Overall, I think the senator’s comments are disappointing because they are perpetuating fear-mongering around generative AI systems that is not very constructive and is preventing us from actually engaging with the real issues with AI systems that are not generative. And to the extent there’s an issue, it is with the generative part and not the AI part. And no, alien intelligence is not coming for us, in spite of what you’ve all heard. Sorry, I’m trying to be polite, but I’m struggling a little bit.


VB: What did you think of his response to the response, where he still maintained something is coming and we’re not ready for it?

Venkatasubramanian: I would say something is already here and we haven’t been ready for it and we should do something about that, rather than worrying about a hypothetical that might be coming that hasn’t done anything yet. Focus on the harms that are already seen with AI, then worry about the potential takeover of the universe by generative AI.

VB: This made me think of our chat from last week or the week before where you talked about miscommunication between the policy people and the tech people. Do you feel like this falls under that context?

Venkatasubramanian: This is worse. It’s not misinformation, it’s disinformation. In other words, it’s overt and organized. It’s an organized campaign of fear-mongering. I have to figure out to what end but I feel like the goal, if anything, is to push a reaction against sentient AI that doesn’t exist so that we can ignore all the real problems of AI that do exist. I think it’s terrible. I think it’s really corrupting our policy discourse around the real impacts that AI is having — you know, when Black taxpayers are being audited at three times the rates of white taxpayers, that is not a sentient AI problem. That is an automated decision system problem. We need to fix that problem.

VB: Do you think Sen. Murphy just doesn’t understand, or do you think he’s actually trying to promote disinformation?

Venkatasubramanian: I don’t think the Senator is trying to promote disinformation. I think he’s just genuinely concerned. I think everyone is generally concerned. ChatGPT has heralded a new democratization of fear. Those of us who have been fearful and terrified for the last decade or so are now being joined by everyone in the country because of ChatGPT. So they are seeing now what we’ve been concerned about for a long time. I think it’s good to have that level of elevation of the concerns around AI. I just wish the Senator was not falling into the trap laid by the rhetoric around alien intelligence that frankly has forced people who are otherwise thoughtful to succumb to it. When you get New York Times op-eds by people who should know better, then you have a problem.

VB: Others pointed out on Twitter that anthropomorphizing ChatGPT in this way is also a problem. Do you think that’s a concern?

Venkatasubramanian: This is a deliberate design choice, by ChatGPT in particular. You know, Google Bard doesn’t do this. Google Bard is a system for making queries and getting answers. ChatGPT puts little three dots [as if it’s] “thinking” just like your text message does. ChatGPT puts out words one at a time as if it’s typing. The system is designed to make it look like there’s a person at the other end of it. That is deceptive. And that is not right, frankly.

VB: Do you think Senator Murphy’s comments are an example of what’s going to come from other leaders with the same sources of information about generative AI?

Venkatasubramanian: I think there’s again, a concerted campaign to send only that message to the folks at the highest levels of power. I don’t know by who. But when you have a show-and-tell in D.C. and San Francisco with deep fakes, and when you have op-eds being written talking about sentience, either it’s a collective mass freakout or it’s a collective loss freakout driven by the same group of people.

I will also say that this is a reflection of my own frustration with the discourse, where I feel like we were heading in a good direction at some point and I think we still are among the people who are more thoughtful and are thinking about this in government and in policy circles. But ChatGPT has changed the discourse, which I think is appropriate because it has changed things.

But it has also changed things in ways that are not helpful. Because the hypotheticals around generative AI are not as critical as the real harms. If ChatGPT is going to be used, as is being claimed, in a suicide hotline, people are gonna get hurt. We can wait till then, or we can start saying that any system that gets used as a suicide hotline needs to be under strict guidance. And it doesn’t matter if it’s ChatGPT or not. That’s my point.


Emperia outfits Tommy Hilfiger with cross-metaverse virtual hub

Premium lifestyle brand Tommy Hilfiger today launched a cross-metaverse virtual hub in partnership with 3D technology and virtual reality (VR) platform provider Emperia. As part of the launch, the retailer is simultaneously unveiling several virtual experiences across various platforms, including Decentraland, Roblox, Spatial, DressX and Ready Player Me. 

To simplify the process of navigating among these virtual worlds, the Emperia platform will provide a central hub to easily move in and out of each experience.

The Tommy Hilfiger metaverse hub will be available online starting today.

Because today’s metaverse distribution requires placement on multiple non-integrating platforms, companies face special challenges in assuring consistent branding. Emperia’s interoperable approach, as in its collaboration with Tommy Hilfiger, bridges the gap and enables multi-world experiences to be integrated into brands’ Web3 and ecommerce strategies.

Event

GamesBeat Summit 2023

Join the GamesBeat community in Los Angeles this May 22-23. You’ll hear from the brightest minds within the gaming industry to share their updates on the latest developments.



Register Here

Emperia’s gateway connects fragmented environments, providing users with a seamless experience that enables them to enjoy each platform’s unique capabilities without having to choose among them. As a result, the hub creates a new, unique brand experience said to exceed physical retail alternatives. 

Streaming a cross-metaverse with style

The Emperia-created hub features DressX-powered digital fashion, Web3 artist collaborations with Vinnie Hagar, AR features, a photo booth, gamification and a community-focused competition to create AI fashion. Tommy Hilfiger’s well-known “TH” monogram will appear across all platforms, creating a unified digital brand story, while providing movement between the retailer’s website and the various metaverses, delivering an end-to-end shopping journey with a unique impact.

Emperia’s rendering capabilities standardize graphic quality across platforms, creating an easily accessible and high-performance experience without requiring users to download any special software. The experience is available on almost any device.

Some metaverse platforms limit payment options to cryptocurrency. By integrating with the retailer’s ecommerce platform, Emperia provides users with a wider range of payment options, reducing friction and increasing user confidence, resulting in higher online sales.

Overall, with the cross-metaverse hub, Emperia aims to introduce a new layer of interoperability, blurring the frontiers of Web3 and enabling connections between the metaverse, ecommerce, entertainment and direct performance, all backed by data. Meanwhile, Emperia’s dataset capabilities allow granular insights into the user journey and engagement across different metaverse experiences, enabling a cross-data approach not previously offered.

Emperia and Tommy Hilfiger: Dressed for iconic success

The Tommy Hilfiger digital hub also aims to improve the product experience by offering four exclusive items, with the iconic Varsity Jacket taking the lead, presented in various aesthetic representations across all platforms.

Customers can purchase the jacket in two forms: physical, connected to Tommy’s ecommerce platform, and digital, connected to the DressX digital fashion platform. The Emperia hub provides access to the physical jacket for sale, while the Ready Player Me platform offers the digital version, which can be used across various games and environments, increasing the interoperability options.

“Emperia is continuing to change the face of virtual retail, pushing the envelope and supporting retailers along their ecommerce transition journey,” said Olga Dogadkina, co-founder and CEO of Emperia. She highlighted the industry’s movement towards collaboration, with each technology vendor leveraging its unique capabilities and traits under Emperia’s virtual environments.

The collaboration with Tommy Hilfiger and the PVH group is an example of this, creating a brand-new digital retail environment that enhances user experience and encourages brand engagement and shopper loyalty by consolidating the fragmented industry into a streamlined experience.

The ultimate goal is to increase ecommerce performance by centralizing the payment process and allowing users to freely navigate across the retailer’s online properties.


MetaKing Studios partners with Polygon Labs on Blocklords game

Connect with top gaming leaders in Los Angeles at GamesBeat Summit 2023 this May 22-23. Register here.


MetaKing Studios announced today that it is partnering with Polygon Labs for the launch of Blocklords, its upcoming medieval strategy game, in the metaverse. Polygon’s Ethereum Layer-2 scaling solutions will power the game’s player economy, where players can become rich rulers and warlords through tactical battles. MetaKing plans to launch the game later this year.

Blocklords is a strategy game similar to Age of Empires or Crusader Kings. Players begin as characters of varying social backgrounds and build their fortunes via war with their neighbors or from collecting taxes. It also has a dynastic inheritance system where characters’ traits pass to their children. At the time of writing, it has 265,000 users registered to play.

David Johansson, Blocklords’ CEO, said in a statement, “We’re thrilled to announce our collaboration with Polygon Labs and the expansion of Blocklords’ in-game economy. It is truly remarkable to see what the Polygon Labs team has been doing for mainstream adoption, and we are thrilled and lucky to have them on board as we move toward game launch later this year.”

MetaKing raised $15 million last year for Blocklords, with funding from Makers Fund, Bitkraft Ventures and Animoca Brands among others.


Polygon’s scaling helps expand the in-game world and its in-game assets. Urvit Goel, Polygon Labs’ head of global business development, said, “Blocklords is unlocking the power of Web3 for gamers by allowing them to benefit from digital asset ownership and utility. Supported by Polygon’s scalable, low-cost solution, Blocklords has the potential to go stratospheric.”

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings.


Hugging Face reveals generative AI performance gains with Intel hardware



Nvidia’s A100 GPU accelerator has enabled groundbreaking innovations in generative AI, powering cutting-edge research that is reshaping what artificial intelligence can achieve.

But in the fiercely competitive field of AI hardware, others are vying for a piece of the action. Intel is betting that its latest data center technologies — including the new 4th Gen Intel Xeon “Sapphire Rapids” CPU and the AI-optimized Habana Gaudi2 accelerator — can provide an alternative platform for machine learning training and inference.

On Tuesday, Hugging Face, an open-source machine learning organization, released a series of new reports showing that Intel’s hardware delivered substantial performance gains for training and running machine learning models. The results suggest that Intel’s chips could pose a serious challenge to Nvidia’s dominance in AI computing.

Hugging Face reported that the Intel Habana Gaudi2 ran inference on the 176-billion-parameter BLOOMZ model 20% faster than the Nvidia A100-80G. BLOOMZ is a variant of BLOOM (an acronym for BigScience Large Open-science Open-access Multilingual Language Model), which had its first major release in 2022 with support for 46 human languages. Going a step further, Hugging Face reported that the smaller 7-billion-parameter version of BLOOMZ runs roughly three times faster on the Gaudi2 than on the A100-80G.


On the CPU side, Hugging Face published data showing the performance gains of the latest 4th Generation Intel Xeon CPU over the prior 3rd generation part. According to Hugging Face, Stability AI’s Stable Diffusion text-to-image generative AI model runs 3.8 times faster without any code changes. With some modification, including the use of the Intel Extension for PyTorch with bfloat16, a reduced-precision number format widely used in machine learning, Hugging Face said it achieved nearly a 6.5-times speedup. Hugging Face has posted an online demonstration tool so anyone can experience the speed difference.
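For context, bfloat16 keeps float32’s sign bit and 8-bit exponent (and thus its full dynamic range) but truncates the mantissa from 23 bits to 7, trading precision for smaller, faster tensors. A minimal pure-Python sketch of that rounding step follows; it is our own illustration of the format, not Intel’s or PyTorch’s implementation:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a float to the nearest value representable in bfloat16.

    bfloat16 reuses float32's sign bit and 8-bit exponent but keeps
    only the top 7 mantissa bits, so conversion is just rounding away
    the low 16 bits of the float32 encoding.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Round-to-nearest-even on the discarded low 16 bits, then zero them.
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(to_bfloat16(3.14159265))  # 3.140625 -- only ~3 decimal digits survive
```

4th Gen Xeon accelerates bfloat16 natively through its AMX matrix units, which is where speedups of the kind Hugging Face reports come from.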

“Over 200,000 people come to the Hugging Face Hub every day to try models, so being able to offer fast inference for all models is super important,” Hugging Face product director Jeff Boudier told VentureBeat. “Intel Xeon-based instances allow us to serve them efficiently and at scale.”

Of note, Hugging Face’s new performance claims for Intel hardware did not include a comparison with the newer Nvidia H100 Hopper-based GPUs. The H100 has only recently become available to organizations like Hugging Face, which, Boudier said, has so far been able to do only limited testing with it.

Intel’s strategy for generative AI is end-to-end

Intel has a focused strategy for growing the use of its hardware in the generative AI space. The strategy spans both training and inference, not just for the biggest large language models (LLMs) but also for real use cases, from the cloud to the edge.

“If you look at this generative AI space, it’s still in the early stages and it has gained a lot of hype with ChatGPT in the last few months,” Kavitha Prasad, Intel’s VP and GM of datacenter, AI and cloud execution and strategy, told VentureBeat. “But the key thing is now taking that and translating it into business outcomes, which is still a journey that’s to be had.”

Prasad emphasized that an important part of Intel’s strategy for AI adoption is enabling a “build once and deploy everywhere” concept. The reality is that very few companies can actually build their own LLMs. Instead, most organizations will fine-tune existing models, often using transfer learning, an approach that Intel supports and encourages with its hardware and software.
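The fine-tuning pattern Prasad describes reduces to a simple shape: freeze a “pretrained” backbone and train only a small task-specific head. The toy features, task and training loop below are our own illustration of that general technique, not Intel’s software stack:

```python
import random

random.seed(0)

# Stand-in "pretrained" backbone: in real transfer learning this would be a
# large model's frozen layers; here it is just fixed random ReLU features.
FROZEN_W = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(4)]

def features(x):
    """Frozen feature extractor; its weights are never updated."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in FROZEN_W]

def mse(head, bias, data):
    errs = [(sum(h * f for h, f in zip(head, features(x))) + bias - y) ** 2
            for x, y in data]
    return sum(errs) / len(errs)

def finetune_head(data, epochs=200, lr=0.05):
    """Train only the small linear head by SGD; FROZEN_W stays untouched."""
    head, bias = [0.0] * len(FROZEN_W), 0.0
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            err = sum(h * fi for h, fi in zip(head, f)) + bias - y
            head = [h - lr * err * fi for h, fi in zip(head, f)]
            bias -= lr * err
    return head, bias

# Toy downstream task: learn y = x0 + x1 from nine grid points.
data = [((a, b), a + b) for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)]
before = mse([0.0] * len(FROZEN_W), 0.0, data)
head, bias = finetune_head(data)
after = mse(head, bias, data)
print(after < before)  # True: training the head alone fits the new task
```

The same shape appears with real frameworks: load a pretrained model, freeze its parameters, and optimize only the new task-specific layer.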

With Intel Xeon-based servers deployed in all manner of environments including enterprises, edge, cloud and telcos, Prasad noted that Intel has big expectations for the wide deployment of AI models.

“Coopetition” with Nvidia will continue with more performance metrics to come

While Intel is clearly competing against Nvidia, Prasad said that in her view it’s a “coopetition” scenario, which is increasingly common across IT in general.

In fact, Nvidia is using the 4th Generation Intel Xeon in some of its own products, including the DGX H100 systems announced in January.

“The world is going towards a ‘coopetition’ environment and we are just one of the participants in it,” Prasad said.

Looking forward, she hinted at additional performance metrics from Intel that will be “very positive.” In particular, the next round of MLCommons MLPerf AI benchmarking results is due to be released in early April. She also hinted that more hardware is coming soon, including a Habana Gaudi3 accelerator, though she did not provide any details or timeline.





Tears of the Kingdom trailer shows new item-fusing ability



Nintendo dropped a new trailer today for the upcoming Legend of Zelda: Tears of the Kingdom. It shows not only the updated version of Hyrule, but also some of the new abilities Link can use to survive in this more vertically inclined world. The most significant ability shown is a new fusing mechanic that allows Link to magically glue multiple items together.

Eiji Aonuma, producer of the Zelda series, introduced the gameplay abilities in the trailer. The first and most obvious use of the new fusing ability is to combine weapons, in the manner of Dead Rising. This allows Link to find new and effective weapon combinations for different purposes. In the trailer, Link fuses a branch and a boulder together to make a hammer, and an arrow and an eyeball to make a homing arrow.

Aonuma also showed how the fusing mechanic can be used to create new objects, fusing logs and fans together to make a boat. Another variation on the ability, called Ultrahand, allows Link to detach fused objects.

The trailer also shows a few other abilities in addition to fusing. One is an ascension mechanic, which allows Link to target the ceiling of any enclosed space he’s in and magically jump through it to whatever surface is above. Another is a rewind ability, which Link uses in the trailer on a fallen rock to ascend swiftly to the sky islands above Hyrule.


Nintendo has not shown much of Tears of the Kingdom up to this point, despite the game’s release date being less than two months away. The trailer also revealed a new Zelda-branded Switch launching in conjunction with the game. Tears of the Kingdom officially launches on May 12.

