MedyMatch, Samsung NeuroLogica bring AI to stroke care

Artificial intelligence is continuing to make its mark in the healthcare field.

Tel Aviv, Israel-based MedyMatch Technology and Danvers, Massachusetts-based Samsung NeuroLogica have joined forces to use artificial intelligence to assist patients in prehospital environments.

MedyMatch is an artificial intelligence company. “Our business is based on machine learning,” MedyMatch CEO Gene Saragnese said in a phone interview with MedCity.


Samsung NeuroLogica is the healthcare subsidiary of Samsung Electronics. “NeuroLogica has been in the CT business for many years,” Saragnese said.

The alliance, which brings together MedyMatch’s AI clinical decision support tools and Samsung NeuroLogica’s medical imaging hardware, was a smart move for the companies, according to Saragnese. “There’s a strong overlap between the two companies,” he said.

Initially, the companies plan to focus on assessing stroke patients. MedyMatch’s AI technologies will be integrated into mobile stroke units and other emergency vehicles that have a portable Samsung NeuroLogica CereTom CT scanner. Through this, the care team will more easily be able to assess whether the patient’s stroke is due to a hemorrhage or a blood clot.
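The distinction matters because the two stroke types look different on a non-contrast CT scan: acute blood is hyperdense, typically around 50-100 Hounsfield units (HU), versus roughly 20-40 HU for normal brain tissue. As a rough illustration of why this is machine-detectable (this is not MedyMatch's actual algorithm; the threshold values and function names here are purely illustrative), a crude density-based screen might look like:

```python
import numpy as np

# Acute blood on non-contrast head CT is hyperdense, roughly
# 50-100 Hounsfield units (HU), vs ~20-40 HU for brain parenchyma.
HEMORRHAGE_HU = (50, 100)

def hemorrhage_fraction(ct_slice_hu):
    """Fraction of voxels in a CT slice falling in the acute-blood
    HU range -- a crude stand-in for a learned detector."""
    lo, hi = HEMORRHAGE_HU
    mask = (ct_slice_hu >= lo) & (ct_slice_hu <= hi)
    return mask.mean()

# Synthetic 4x4 "slice": mostly brain tissue (~30 HU) with a
# hyperdense patch (~70 HU) standing in for a bleed.
ct = np.full((4, 4), 30.0)
ct[1:3, 1:3] = 70.0

frac = hemorrhage_fraction(ct)
print(frac > 0.1)  # True: flag the scan for review
```

A production system would use learned models over full 3D volumes rather than a fixed HU threshold; this sketch only shows the underlying density signal such models exploit.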

Many of the nearly 800,000 Americans who experience a stroke each year have an ischemic stroke, which can be treated with tissue plasminogen activator (tPA). The drug must be administered within three hours of the initial signs of stroke, but “it can take an hour after a stroke patient arrives in the emergency department to receive treatment because of the time needed to determine which kind of stroke the patient is having,” the companies point out in a release. By collaborating, MedyMatch and Samsung NeuroLogica hope to speed treatment for stroke patients en route to the hospital.

“In stroke care, time is absolutely critical,” Saragnese said. “We want to improve the confidence physicians have in making these decisions.”

But MedyMatch’s goal goes farther than that. Saragnese told MedCity that MedyMatch strives to improve clinical outcomes and ultimately save money. “What we want to do is improve the quality of diagnosis and speed of treatment, and more people will recover from stroke,” he said. “There will also be fewer people in long-term care, and then there will be cost savings.”

MedyMatch launched in February 2016. Though it’s a startup, the company has already begun to make its mark in the healthcare field. Last June, it partnered with Capital Health in New Jersey, which agreed to help MedyMatch develop a clinical decision support tool for stroke care.

Photo: John Lund, Getty Images

Some of the most exciting (and scary) aspects of machine learning that you may not know about

The chatter around artificial intelligence has grown loud enough that many are inclined to dismiss it as hype. That’s unfair: while certain applications, such as self-driving cars, are still a long way from the mainstream, the technology itself is a fascinating topic. After listening to a talk recently by Dr. Eric Horvitz, Microsoft Research managing director, I can appreciate that the number of applications being conceived around the technology is only matched by the ethical dilemmas surrounding it. But in both cases, they are much more varied than what typically dominates the conversation about AI.

For fans of the ethical roads less traveled in AI, Horvitz offered a fair few items for his audience to consider at the SXSW conference last week that alternated between hope for the human condition and fear for it. Although I previously highlighted some of the healthcare applications he discussed, there are plenty of issues he raised that one day could be just as relevant to healthcare. I have included a few of them here.

Interpreting facial expressions

The idea of machine learning being applied to make people more connected to each other and to improve our communication skills in subtle ways is fascinating to me. One example used was a blind man conducting a meeting and receiving auditory cues on the facial expressions of his audience. The idea is to provide more insight into the people around him so he can have a better sense of how the points he raises are perceived beyond what the people in the meeting actually say. In a practical way, it gives him an additional layer of knowledge he wouldn’t have otherwise and makes him feel more connected to others.

The ethical decisions of self-driving cars

As exciting as the prospect of self-driving cars is, Horvitz called attention to some of the important, still unresolved questions of how they would perform in an accident or when trying to avoid one. What decisions would the computer make when, say, a collision with a pedestrian is likely and the car has to make a split-second choice? Does it preserve the life of the driver or the pedestrian, if it comes to that? What responsibility does the manufacturer have? What values will be embedded in the system? How should manufacturers disclose this information?

A slide that was part of Dr. Eric Horvitz’s talk at SXSW this year.

Adversarial machine learning

One fascinating topic addressed in the talk was how machine learning could be used with negative intent, referred to as adversarial machine learning. It involves feeding a computer information that changes how it interprets images and words and how it processes information. In one study, a computer that had been trained on images of a stop sign could be retrained to interpret those images as a yield sign. That has important implications for self-driving cars and automated tasks in other sectors.
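A minimal sketch of the evasion side of this idea, with everything made up for illustration: using a toy linear classifier standing in for an image model, an attacker can flip the predicted label by nudging the input along the sign of the gradient of the model's score, the so-called fast gradient sign method. The weights and inputs below are invented, not from any real system:

```python
import numpy as np

# Toy linear classifier standing in for an image model:
# score > 0 -> "yield sign", score <= 0 -> "stop sign".
w = np.array([1.0, -2.0, 0.5])   # fixed, pretend-trained weights
b = 0.0

def score(x):
    return float(w @ x + b)

def label(x):
    return "yield" if score(x) > 0 else "stop"

x = np.array([0.2, 0.3, 0.1])    # a "stop sign" input

# FGSM-style attack: nudge the input along the sign of the score's
# gradient w.r.t. x (which, for a linear model, is just w).
eps = 0.2
x_adv = x + eps * np.sign(w)

print(label(x), "->", label(x_adv))  # stop -> yield
```

Against deep networks, the same gradient trick can yield perturbations small enough to be imperceptible to humans, which is what makes the stop-sign example alarming.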

Another facet of adversarial machine learning is tracking individuals’ Web searches, the likes and dislikes they share on social networks, and the kinds of content they tend to click on, and using that information to manipulate them. That could cover a wide swathe of misdeeds, from manipulation via fake tweets designed by neural networks in the personality of the account holder to particularly nasty phishing attacks. Horvitz noted that these AI attacks on human minds will be an important issue in our lifetime.

“We’re talking about technologies that will touch us in much more intimate ways because they are the technologies of intellect,” Horvitz said.

Applying AI to judicial sentencing software

Although machine learning for clinical decision support tools is an area of interest in healthcare, used to help identify patients at risk of readmission or to analyze medical images for patterns and anomalies, it’s also entering the realm of judicial sentencing. The concern is that these software tools, which some states permit judges to use in determining sentences, can encode the biases of their human creators and further erode confidence in the legal system. ProPublica drew attention to the issue last year.

Wrestling with ethical issues and challenges of AI

Horvitz likened the current stage of AI development to the Wright brothers’ first airplane flight at Kitty Hawk, North Carolina, which rose 20 feet off the ground and lasted all of 12 seconds. But the risk and challenge of many technologies is that, at a certain point, they can progress far faster than anyone anticipates. This is why there has been a push, through efforts such as the Partnership on AI, to wrestle with the ethical issues of AI proactively rather than address them after the fact in a reactive way. Eight years ago, Stanford University set up AI100, an initiative to study AI over the next 100 years. The idea is that the group will study and anticipate how artificial intelligence will affect every aspect of how people work and live.

Photo: Andrzej Wojcicki, Getty Images

TwoXAR merges artificial intelligence, drug discovery and… clones?

Artificial intelligence (AI) is steadily reshaping healthcare from all sides, introducing technologies we wouldn’t have thought possible five or 10 years ago.

It’s happening in the clinic (see HealthTap’s Doctor A.I.), it’s happening in diagnostics (see IBM Watson), and now it’s moving into earlier-stage drug discovery with Palo Alto, California-based twoXAR.

“In the couple years that we have been around, we’ve been told hundreds of times that computers cannot do this; that biology is too complex; that this will never work,” said Andrew A. Radin, CEO of the AI-driven biopharmaceutical company. “Yet, in every single disease program where we have run proof-of-concept studies on our novel AI-identified candidates, we have generated efficacious results across standard end points.”

Using a custom-built computational platform, twoXAR works to identify what it calls “unanticipated associations between drug and disease.” With the compounds of interest in hand, the team runs a series of preclinical studies to ‘de-risk’ them. The ultimate aim is to advance the candidates into the clinic through industry and investor partnerships.

How do clones fit into this? They don’t really, but it just so happens that the two cofounders share the same relatively uncommon name, Andrew Radin. Andrew A. Radin is the CEO and Andrew M. Radin is the chief marketing officer. Together they formed twoXAR (two times Andrew Radin) in 2014 with a $3.4 million seed round led by Andreessen Horowitz’s Biofund and Stanford Start X Fund.

One of the selling points is the agnostic approach a computer can take to drug discovery. There’s no human bias, no restrictions on what disease areas can be targeted or what kind of science needs to be done. The platform can sift through both small and large molecule libraries.

It’s an interesting concept, given the high rate of failure (approximately 90 percent) in clinical drug development.

With the rise of AI, many different interpretations are coming to light. In an email forwarded by a company representative, Andrew M. Radin described what AI means to the company.

“In our case, leveraging real-world big biomedical data to build predictive algorithms that can make predictions that can be used to drive rational decision making in drug discovery,” said Andrew M. Radin. “AI is the term that best encapsulates and most succinctly describes what we do in a way that can be understood by our various audiences including investors and potential biopharma partners.”

In February, twoXAR announced a partnership with Osaka, Japan-based Santen Pharmaceutical. Under the agreement, twoXAR will use its AI platform to discover, screen, and prioritize novel drug candidates that are most likely to be able to treat ocular indications, specifically glaucoma.

An earlier project saw the platform applied to hepatocellular carcinoma (HCC or liver cancer). The company screened a library of more than 25,000 potential drug candidates, identifying the 10 top candidates for HCC. Proof-of-concept studies were then performed by The Asian Liver Center at Stanford University. 
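In computational terms, a screen like this is a rank-and-shortlist step: score every compound in the library against a disease model, then keep the top k. A toy sketch of that step (the compound names, random scores, and scale are hypothetical; twoXAR's actual platform and scoring are proprietary):

```python
import heapq
import random

random.seed(0)

# Hypothetical library: 25,000 compound IDs, each with a model score
# (here a random number standing in for a predicted association score).
library = {f"CMPD-{i:05d}": random.random() for i in range(25_000)}

# Keep the 10 highest-scoring candidates for follow-up studies.
top10 = heapq.nlargest(10, library.items(), key=lambda kv: kv[1])

for name, s in top10:
    print(f"{name}: {s:.4f}")
```

Using `heapq.nlargest` rather than sorting the whole library keeps the shortlist step at O(n log k), which matters as libraries grow past small-molecule scale.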

“The objective of these experiments was to establish which of the original 10 candidates we identified might be promising liver cancer treatments and generate preliminary preclinical data,” Andrew A. Radin explained. “While we had a few promising results among the 10 candidates, TXR-311 stood out as it killed liver cancer cells with high selectivity. Specifically, very low doses of TXR-311 killed five different liver cancer cell lines. But in healthy liver cells, 500 times as much TXR-311 was needed to cause cell death. In contrast, treating healthy cells with only 3 times the dose of sorafenib [an FDA-approved therapy] needed to kill liver cancer cells is enough to kill healthy cells.”
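The selectivity claim in the quote reduces to a simple ratio, sometimes called a selectivity or therapeutic index: the dose that harms healthy cells divided by the dose that kills cancer cells. Using only the relative doses quoted above (the article gives no absolute units, so these are relative values, and the function name is illustrative):

```python
def selectivity_index(toxic_dose_healthy, effective_dose_cancer):
    """Ratio of the dose that harms healthy cells to the dose that
    kills cancer cells; higher means a wider safety margin."""
    return toxic_dose_healthy / effective_dose_cancer

# Relative doses from the quote (cancer-killing dose normalized to 1):
txr311 = selectivity_index(toxic_dose_healthy=500, effective_dose_cancer=1)
sorafenib = selectivity_index(toxic_dose_healthy=3, effective_dose_cancer=1)

print(txr311, sorafenib)   # 500.0 3.0
print(txr311 / sorafenib)  # TXR-311's window is ~167x wider
```

By this measure, TXR-311's window in these cell-line experiments is on the order of 167 times wider than sorafenib's, which is why it "stood out."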

This recipe for drug discovery hits on an emerging theme within artificial intelligence. Done well, AI doesn’t replace humans and biology; it enhances them. TwoXAR’s computational platform is only one piece of the puzzle, as the company then looks to vet and advance a ‘derisked’ drug candidate into clinical trials.

Another central theme: AI companies have to forge their own path. Computers have never reached these frontiers before.

“Being entrepreneurs, we thrive in hearing ‘no’ as an answer, but interpreting it as ‘not yet,’” Andrew A. Radin said. “We believe our biggest success to date is doing that which people have told us could not be done. Whether that’s discovering novel drug candidates with novel biology using our AI-driven platform or generating efficacious results in proof-of-concept studies or sharing IP on our discoveries with a leading biopharma company.”

Expect more biopharma partnerships in the coming years. In terms of scalability, a quick search on LinkedIn reveals at least eight more Andrew Radins if the company wants to go three or four XAR.

Photo: Andrzej Wojcicki, Getty Images

AliveCor launches clinical app with AI function for early detection of AFib to prevent stroke

Screenshot of AliveCor’s Kardia Pro app for clinicians.

AliveCor, which has developed an FDA-cleared smartphone-enabled ECG device, has launched a clinician-facing app using artificial intelligence to pick up signs of atrial fibrillation earlier, according to a company news release. It’s an interesting development for the business because it can alert physicians to patients with an elevated risk of having a stroke.

The Kardia Pro app is for clinical use. The goal is to use AI to analyze patient data, including weight, activity and blood pressure, to build a personalized heart profile for each patient, the news release said.

Last year, AliveCor partnered with Omron Healthcare to add Omron’s hypertension screening capabilities to AliveCor’s app.

An estimated 795,000 people suffer a stroke each year, the majority of them for the first time. If you factor in hospitalization, medications and time off of work, strokes cost the U.S. roughly $33 billion each year, according to data from the Centers for Disease Control.

AliveCor also closed a $30 million Series D round led by Omron Healthcare and Mayo Clinic. The funding will be used to speed up innovation in heart health and grow the business.

The launch of the company’s Kardia Pro app is an important milestone for AliveCor. But at a time when the hype around AI has reached a fever pitch, clinical validation will be critical to demonstrate how effective the company’s technology is at spotting early signs of life-threatening conditions such as stroke, and whether these interventions improve patient outcomes.

Photo: Bigstock