Baby-led weaning won’t necessarily ward off extra weight

When my younger daughter was around 6 months old, we gave her mashed-up prune. She grimaced and shivered a little, appearing to be absolutely disgusted. But then she grunted and reached for more.

Most babies are ready for solid food around 6 months of age, and feeding them can be fun. One of the more entertaining approaches does not involve a spoon. Called baby-led weaning, it involves allowing babies to feed themselves appropriate foods.

Proponents of the approach say that babies become more skilled eaters when allowed to explore on their own. They’re in charge of getting food into their own mouths, gumming it and swallowing it down — all skills that require muscle coordination. When the right foods are provided (yes to soft steamed broccoli; no to whole grapes), babies who feed themselves are no more likely to choke than their spoon-fed peers.

Some baby-led weaning proponents also suspected that the method might ward off obesity, and a small study suggested as much. The idea is that babies allowed to feed themselves might better learn how to regulate their food intake, letting hunger and fullness guide them to a reasonable calorie count. But a new study that looked at the BMIs of babies who fed themselves and those who didn’t found that babies grew similarly with either eating style.

A clinical trial of about 200 mother-baby pairs in New Zealand tracked two different approaches to eating and their impact on weight. Half of the moms were instructed to feed their babies as they normally would, which for most meant spoon-feeding purees, at least early on. The other half were instructed to give only breast milk or formula until 6 months of age, after which babies could be encouraged to feed themselves. These mothers also received breastfeeding support.

At the 1- and 2-year marks, the babies’ average BMI z-scores were similar regardless of feeding method, researchers report July 10 in JAMA Pediatrics. (A BMI z-score takes age and sex into account.) Baby-led weaning actually produced slightly more overweight babies than the traditional approach, though not enough to be meaningful: at age 2, 10.3 percent of baby-led weaning babies were considered overweight, versus 6.4 percent of traditionally fed babies. And analyses of the nutritional value and amount of food eaten revealed that the two groups of babies took in about the same energy.
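For readers curious about the arithmetic, a z-score is simply a distance from a reference mean, measured in standard deviations. A minimal sketch, with made-up reference values (real growth standards, such as WHO's, use age- and sex-specific lookup tables):

```python
# Simplified BMI z-score: how far a child's BMI sits from the reference
# mean for that age and sex, in standard-deviation units.
# The reference mean and SD below are invented for illustration.

def bmi_z_score(bmi, ref_mean, ref_sd):
    return (bmi - ref_mean) / ref_sd

print(round(bmi_z_score(17.8, 16.5, 1.3), 2))  # 1.0
```

A z-score of 0 means the child's BMI matches the reference average; comparing z-scores lets researchers pool boys and girls of different ages in one analysis.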

The trial found a few other differences between the two groups. Babies who did baby-led weaning exclusively breastfed for longer, a median of about 22 weeks. Babies in the other group were exclusively breastfed for a median of about 17 weeks. Babies in the baby-led weaning group were also more likely to have held off on solid food until 6 months of age.

While baby-led weaning may not protect babies against being overweight, the study did uncover a few perks of the approach. Parents reported that babies who fed themselves seemed less fussy about foods. These babies also reportedly enjoyed eating more (though my daughter’s prune fake-out face is evidence that babies’ inner opinions can be hard to read). Even so, these data seem to point toward a more positive experience all around when using the baby-led weaning approach. That’s ideal for both experience-hungry babies and the parents who get to savor watching them eat.

Seeing an adult struggle before succeeding inspires toddlers to persevere too

I recently wrote about the power that adults’ words can have on young children. Today, I’m writing about the power of adults’ actions. Parents know, of course, that their children keep a close eye on them. But a new study provides a particularly good example of a watch-and-learn moment: Toddlers who saw an adult struggle before succeeding were more likely to persevere themselves.

Toddlers are “very capable learners,” says study coauthor Julia Leonard, a cognitive developmental psychologist at MIT. Scientists have found that these youngsters pick up on abstract concepts and new words after just a few exposures. But it wasn’t clear whether watching adults’ actions would actually change the way toddlers tackle a problem.

To see whether toddlers could soak up an adult’s persistence, Leonard and her colleagues tested 262 13- to 18-month-olds (the average age was 15 months). Some of the children watched an experimenter try to retrieve a toy stuck inside a container. In some cases, the experimenter quickly got the toy out three times within 30 seconds — easy. Other times, the experimenter struggled for the entire 30 seconds before finally getting the toy out. The experimenter then repeated the process for a different problem, removing a carabiner toy from a keychain. Some kids didn’t see any experimenter demonstration.

Just after watching an adult struggle (or not), the toddlers were given a light-up cube. It had a big, useless button on one side. Another button — small and hidden — actually controlled the lights. The kids knew the toy could light up, but didn’t know how to turn the lights on.

Though the big button did nothing, that didn’t stop the children from poking it. But here’s the interesting part: Compared with toddlers who had just watched an adult succeed effortlessly, or not watched an adult do anything at all, the toddlers who had seen the adult struggle pushed the button more. These kids persisted, even though they never found success.

The sight of an adult persevering nudged the children toward trying harder themselves, the researchers conclude in the Sept. 22 Science. Leonard cautions that it’s hard to pull parenting advice from a single laboratory-based study, but still, “there may be some value in letting children see you work hard to achieve your goals,” she says.

Observing the adults wasn’t the only thing that determined the toddlers’ persistence, not by a long shot. Some kids might simply be more tenacious than others. In the experiments, some of the children who didn’t see an experimenter attempt a task, or who saw an experimenter quickly succeed, were “incredibly gritty,” Leonard says. And some of the kids who watched a persistent adult still gave up quickly themselves. That’s not to mention the fact that these toddlers were occasionally tired, hungry and cranky, all of which can affect whether they give up easily. Despite all of this variation, the copycat effect remained: Kids were more likely to persist when they had just seen a persistent adult.

As Leonard says, this is just one study and it can’t explain the complex lives of toddlers. Still, one thing is clear, and it’s something that we would all do well to remember: “Infants are watching your behavior attentively and actively learning from what you do,” Leonard says.

Lakers vs. Warriors final score, results: Golden State forces Game 6 as Anthony Davis suffers head injury

Faced with a do-or-die situation in Game 5, the Warriors came up big.

A 121-106 win at Chase Center kept Golden State's season alive as they successfully avoided elimination at the hands of the Lakers. The series will now head back down to Los Angeles for Game 6, where LeBron James and Co. will have another chance to punch their ticket to the Western Conference Finals.

Stephen Curry led the way for the Warriors with 27 points and eight assists. Andrew Wiggins finished with 25 points and seven rebounds, while Draymond Green had one of his best games of the postseason with 20 points and 10 rebounds.

A bad night got even worse for LA when Anthony Davis was forced to leave the game late in the fourth quarter. He appeared to take a shot to the face from Golden State's Kevon Looney, and TNT's Chris Haynes reported that he was taken down the tunnel in a wheelchair. The Lakers now face an anxious wait to see whether he'll be good to go for a massive Game 6.

The Sporting News was tracking all the key moments as the Warriors defeated the Lakers in Game 5 of the Western Conference semifinals:

Lakers vs. Warriors score
Team      Q1  Q2  Q3  Q4  Final
Lakers    28  31  23  24  106
Warriors  32  38  23  28  121
Lakers vs. Warriors live score, updates, highlights from Game 5
12:31 a.m. FINAL — The final buzzer rings out, and we're headed to a Game 6. Curry finishes with 27 points, Wiggins with 25 and Draymond Green with 20.

12:27 a.m. — Darvin Ham has raised the white flag and sent in his reserves. Golden State is going to get the win, and their season is going to continue. An excellent performance by the Warriors tonight with their backs against the wall.

12:24 a.m. — Both teams continue to trade blows, but with time ticking down, the Warriors look like they're going to cruise to a win here. Davis still hasn't returned to the court, and it sounds like he may be dealing with some dizziness and vision difficulties. He'll probably be sidelined for the rest of the game.

12:19 a.m. — Steph with a big shot! A triple from the corner extends the lead back to 14 points! It's Warriors 109, Lakers 95 as we enter the final minutes.

12:16 a.m. — Draymond gets the crowd on its feet with a nice jumper, but Austin Reaves answers at the other end with a three from way downtown! The Lakers have chipped away and the lead is now down to just nine points with 5:25 left.

12:14 a.m. — Davis heads down the tunnel toward the locker room after that injury. TNT's Chris Haynes reported he looked a little shaky on his feet and needed some help to stay upright. Let's hope he's OK.

12:09 a.m. — Gary Payton II finishes with the hoop and harm over LeBron, and the crowd is loving it. To add insult to injury for the Lakers, Anthony Davis appeared to take an elbow to the face on the other end. He appears to be in significant discomfort, and he is forced to head to the bench.

12:03 a.m. — Any momentum the Lakers may have had has quickly vanished early in the fourth quarter. Curry drains a pull-up jumper with the shot clock winding down, then Wiggins converts on a running floater in the lane to stretch the lead back to 15 points. LA is running out of time here.

11:55 p.m. END OF THIRD QUARTER — The Lakers use a mini-run to cut into the lead slightly. LeBron converts on a layup with time winding down in the quarter, and we enter the final frame with Golden State up 93-82. James appeared to land on the foot of Wiggins on that last shot, and he was grimacing a little bit as he walked away. Something to keep an eye on.

11:49 p.m. — With the third quarter winding down, the Warriors are showing no signs of letting up. Curry just blew right by three defenders for an easy layup, and once again Darvin Ham has used a timeout to try and spark something from his team.

11:41 p.m. — How about Draymond Green in this game? He's been sensational so far, racking up 18 points on 6 of 10 shooting from the field. He just converted on another layup to make it 85-70, Warriors.

11:32 p.m. — The Lakers are off to a terrible start in this half, and in the blink of an eye the Warriors have stretched their lead to 18 points! Wiggins caps off a 9-2 Golden State flurry with a one-handed putback slam and Darvin Ham takes a timeout to stop the bleeding. That could be a huge momentum swing in this game.

11:27 p.m. START OF SECOND HALF — And away we go in the third quarter. Can the Warriors hold off the Lakers to stay alive?

11:20 p.m. — Davis leads all scorers with 18 points at the half while Wiggins leads Golden State with 16. James has 17 and Curry has 12, including that buzzer-beater to make it an eleven-point game.

11:11 p.m. END OF FIRST HALF — Stephen Curry lights up Chase Center with a three to beat the buzzer! That's just his second trey of the night, but it sends the Warriors into the locker room with a 70-59 lead! They ended the half on a 16-5 run to take control of Game 5.

11:03 p.m. — We knew a run was coming from one of these teams, and this time it has come from the Warriors! Poole connects from deep, then Wiggins follows it up with a triple of his own. After a Lakers timeout, the home team leads 64-56 with less than two minutes left in the half.

10:56 p.m. — LeBron isn't cooling off, and he drives for a layup then knocks down a three moments later to tie things up at 50 apiece. Back and forth we go.

10:51 p.m. — Andrew Wiggins gets a bucket and a foul, then does it again less than 40 seconds later! That pair of three-point plays puts the Warriors back in the lead by five with seven minutes remaining in the first half.

10:45 p.m. — Now LeBron is starting to get going! He buries a pair of three-pointers to take his tally to 12 points on the night and give the Lakers the lead. After he sinks a pair of free throws, it's 41-40, LA.

10:37 p.m. END OF FIRST QUARTER — Whew, time to catch your breath! A Jordan Poole floater with six seconds left on the clock has made it 32-28 Warriors at the end of the first quarter. They could really use a good performance from him tonight. If the game continues like this, we're in for a treat.

10:35 p.m. — This game has been fast-paced and a lot of fun so far. Davis continues to fill it up and he's up to 13 points as we near the end of the first quarter. But 20-year-old Moses Moody has knocked down a pair of threes to keep Golden State's lead intact. They're up 30-26 with just over a minute left in the period.

10:27 p.m. — But here come the Lakers! Anthony Davis is getting himself involved, and his putback dunk cuts the Warriors' lead to just five points. He has nine points already in the early going.

10:20 p.m. — This has been one heck of a start by the Warriors. Gary Payton II drains a three, Draymond converts on another layup and then Stephen Curry opens his account for the night with a three from way downtown. The home team is out to a 17-5 lead less than five minutes into the game.

10:17 p.m. — Draymond Green is off to a fast start! He buries a three to get the Warriors on the board, and his layup through contact draws a foul and leads to a three-point play. Golden State leads 9-3 early.

10:12 p.m. — And there's the opening tip. We are underway in San Francisco.

10:07 p.m. — Knicks-Heat just wrapped up, meaning Warriors-Lakers is up next on TNT. Can Golden State do what New York did and stave off elimination at home in Game 5?

9:59 p.m. — Steph was doing Steph things in pregame warmups.

9:52 p.m. — No surprises from the Lakers with their starting lineup.

9:46 p.m. — For the second game in a row, Gary Payton II gets the start for Golden State.

What channel is Lakers vs. Warriors on?
Date: Wednesday, May 10
TV channel: TNT
Live streaming: Sling TV
Lakers vs. Warriors will air on TNT. Viewers can also stream the game on Sling TV.

What time is Lakers vs. Warriors tonight?
Date: Wednesday, May 10
Time: 10 p.m. ET | 7 p.m. PT
Lakers vs. Warriors will tip off around 10 p.m. ET (7 p.m. local time) on Wednesday, May 10. The game will be played at the Chase Center in San Francisco.

Lakers vs. Warriors odds
Golden State is a 7.5-point favorite heading into Game 5.

           Warriors  Lakers
Spread      -7.5      +7.5
Moneyline   -350      +260
For the full market, check out BetMGM.

Lakers vs. Warriors schedule
Here is the complete schedule for the second-round series between Los Angeles and Golden State:

Date    Game                      Time (ET)  TV channel
May 2   Lakers 117, Warriors 112  10 p.m.    TNT
May 4   Warriors 127, Lakers 100  9 p.m.     ESPN
May 6   Lakers 127, Warriors 97   8:30 p.m.  ABC
May 8   Lakers 104, Warriors 101  10 p.m.    TNT
May 10  Game 5                    10 p.m.    TNT
May 12  Game 6*                   TBD        ESPN
May 14  Game 7*                   TBD        ABC

*If necessary

Mysterious high-energy particles could come from black hole jets

It’s three for the price of one. A trio of mysterious high-energy particles could all have the same source: active black holes embedded in galaxy clusters, researchers suggest January 22 in Nature Physics.

Scientists have been unable to figure out the origins of the three types of particles — gamma rays that give a background glow to the universe, cosmic neutrinos and ultrahigh energy cosmic rays. Each carries a huge amount of energy, from about a billion electron volts for a gamma ray to 100 billion billion electron volts for some cosmic rays.

Strangely, each particle type seems to contribute the same total amount of energy to the universe as the other two. That’s a clue that all three may be powered by the same engine, says physicist Kohta Murase of Penn State.

“We can explain the data of these three messengers with one single picture,” Murase says.

First, a black hole accelerates charged particles to extreme energies in a powerful jet (SN: 9/16/17, p. 16). These jets “are one of the most promising candidate sources of ultrahigh energy cosmic rays,” Murase says. The most energetic cosmic rays escape the jet and immediately plow through a sea of magnetized gas within the galaxy cluster.

Some rays escape the gas as well and zip towards Earth. But less energetic rays are trapped in the cluster for up to a billion years. There, they interact with the gas and create high-energy neutrinos that then escape the cluster.

Meanwhile, the cosmic rays that escaped travel through intergalactic space and interact with photons to produce the glow of gamma rays.

Murase and astrophysicist Ke Fang of the University of Maryland in College Park found that computer simulations of this scenario lined up with observations of how many cosmic rays, neutrinos and gamma rays reached Earth.

“It’s a nice piece of unification of many ideas,” says physicist Francis Halzen of the IceCube Neutrino Observatory in Antarctica, where the highest energy neutrinos have been observed.

There are other possible sources for the particles — for one, IceCube has already traced an especially high-energy neutrino to a single active black hole that may not be in a cluster (SN Online: 4/7/16). The observatory could eventually trace neutrinos back to galaxy clusters. “That’s the ultimate test,” Halzen says. “This could be tomorrow, could be God knows when.”

Here’s the key ingredient that lets a centipede’s bite take down prey

Knocking out an animal 15 times your size — no problem. A newly identified toxin in the venom of a tropical centipede helps the arthropod to overpower giant prey in about 30 seconds.

Insight into how this venom overwhelms lab mice could lead to an antidote for people who suffer excruciatingly painful, reportedly even fatal, centipede bites, an international research team reports the week of January 22 in Proceedings of the National Academy of Sciences.

In Hawaii, centipede bites account for about 400 emergency room visits a year, according to data from 2004 to 2008. The main threat there is Scolopendra subspinipes, an agile species almost as long as a human hand.

The subspecies S. subspinipes mutilans starred in studies at the Kunming Institute of Zoology in China and collaborating labs. Researchers there found a small peptide, now named “spooky toxin,” largely responsible for venom misery.

This toxin blocks a molecular channel that normally lets potassium flow through cell membranes. A huge amount of the biochemistry of staying alive involves potassium, so clogging some of what are called KCNQ channels caused mayhem in mice: slow and gasping breath, high blood pressure, frizzling nerve dysfunctions and so on. Administering the epilepsy drug retigabine opened the potassium channels and counteracted much of the toxin’s effects, raising hopes of a treatment for these bites.

New technique could help spot snooping drones

Now there’s a way to tell if a drone is spying on someone.

Researchers have devised a method to tell what a drone is recording — without having to decrypt the video data that the device streams to the pilot’s smartphone. This technique, described January 9 at arXiv.org, could help military bases detect unwanted surveillance and civilians protect their privacy as more commercial drones take to the skies.

“People have already worked on detecting [the presence of] drones, but no one had solved the problem of, ‘Is the drone actually recording something in my direction?’” says Ahmad Javaid, a cybersecurity researcher at the University of Toledo in Ohio, who was not involved in the work.

Ben Nassi, a software engineer at Ben-Gurion University of the Negev in Israel, and colleagues realized that changing the appearance of objects in a drone’s field of view influences the stream of encrypted data the drone sends to its smartphone controller. That’s because the more pixels that change from one video frame to the next, the more data bits the drone sends per second. So rapidly switching the appearance of a person or house and seeing whether those alterations correspond to higher drone-to-phone Wi-Fi traffic can reveal whether a drone is snooping.

Nassi’s team tested this idea by covering a house window with a smart film that could switch between transparent and nearly opaque, and aiming a drone with a video camera at the window from 40 meters away. Every two seconds, the researchers either flickered the smart film back and forth or left it transparent. They pointed a radio frequency scanner at the drone to intercept its outgoing Wi-Fi signals and found that its traffic spiked whenever the smart film flickered.
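The detection logic amounts to checking whether traffic rises whenever the scene changes on a known schedule. A toy sketch of that idea, not the researchers' actual code, with invented traffic figures:

```python
# Toy sketch: modulate a visual stimulus on a known schedule and check
# whether the drone's encrypted Wi-Fi bitrate rises during "on" intervals.
# All numbers here are made up for illustration.

import statistics

def detect_watching(bitrates_kbps, stimulus_on, threshold=1.5):
    """bitrates_kbps[i] is observed traffic in interval i; stimulus_on[i]
    is True when the flicker/LEDs were active. Returns True if 'on'
    intervals carry markedly more traffic than 'off' intervals."""
    on = [b for b, s in zip(bitrates_kbps, stimulus_on) if s]
    off = [b for b, s in zip(bitrates_kbps, stimulus_on) if not s]
    return statistics.mean(on) > threshold * statistics.mean(off)

# Traffic spikes line up with the flicker schedule: the drone is watching.
traffic = [300, 900, 310, 950, 290, 880]
schedule = [False, True, False, True, False, True]
print(detect_watching(traffic, schedule))  # True
```

If the spikes don't line up with the schedule, the camera is presumably pointed elsewhere, since other scenery drives its compression rate.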

For people without such radio equipment, it’s also possible to intercept Wi-Fi signals with a laptop or computer with a wireless card, says Simon Birnbach, a computer scientist at the University of Oxford not involved in the work.

In another test, a drone recorded someone wearing a strand of LED lights from about 20 meters’ distance. At five-second intervals, the person either flipped the LED lights on and off, or left them off. The drone camera’s data stream peaked whenever the LED lights flickered.

This strategy to discern a drone camera’s target is “a very cool idea,” says Thomas Ristenpart, a computer scientist at Cornell University not involved in the work. But the researchers need to test whether the method works in a wider range of settings and find ways to alter a drone’s view without cumbersome equipment, he says. “I don’t think anyone is going to want to wear a [light-up] shirt on the off chance a drone may fly by.”

Javaid agrees that this prototype system must be made more user-friendly to achieve widespread use. For home security, he imagines a small device stuck to a window that flashes a light and intercepts a drone’s Wi-Fi signals whenever it detects one nearby. The device could alert the homeowner if a drone is found scoping out the house.

Still, identifying a nosy drone may not always be enough to know who’s flying it. “It’s sort of the equivalent of knowing that an unmarked van pulled up and waited outside of your house,” says Drew Davidson, a computer scientist at Tala Security, Inc. in Dallas, who was not involved in the study. “Better to know than not, but not exactly enough for the police to find a suspect.”

Stars with too much lithium may have stolen it

Something is giving small, pristine stars extra lithium. A dozen newly discovered stars contain more of the element than astronomers can explain.

Some of the newfound stars are earlier in their life cycles than stars previously found with too much lithium, researchers report in the Jan. 10 Astrophysical Journal Letters. Finding young lithium-rich stars could help explain where the extra material comes from without having to tinker with well-accepted stellar evolution rules.

The first stars in the Milky Way formed from the hydrogen, helium and small amounts of lithium that were produced in the Big Bang, so most of this ancient cohort have low lithium levels at the surface (SN: 11/14/15, p. 12). As the stars age, they usually lose even more.

Mysteriously, some aging stars have unusually high amounts of lithium. About a dozen red giant stars — the end-of-life stage for a sunlike star — have been spotted over the last few decades with extra lithium at their surfaces. It’s not enough lithium to explain a different cosmic conundrum, in which the universe overall seems to have a lot less lithium than it should (SN: 10/18/14, p. 15). But it’s enough to confuse astronomers. Red giants usually dredge up material that is light on lithium from their cores, making their surfaces look even more depleted in the element.

Finding lithium-enriched red giants “is not expected from standard models of low-mass star evolution, which is usually regarded as a relatively well-established field in astrophysics,” says astronomer Wako Aoki of the National Astronomical Observatory of Japan in Tokyo. A red giant with lots of lithium must have had a huge amount of lithium in its former life, or else some fundamental rule of stellar evolution needs a tweak.

Aoki and his colleagues used the Subaru Telescope in Hawaii to find the 12 new lithium-rich stars, all about 0.8 times the mass of the sun. Five of the stars seem to be relatively early in their life cycles — a little older than regular sunlike stars but a little younger than red giants.

That suggests lithium-rich stars somehow picked up the extra lithium early in their lives, though it’s not clear from where. The stars could have stolen material from companion stars, or eaten unfortunate planets (SN: 5/19/01, p. 310). But there are reasons to think they did neither — for one thing, they don’t have an excess of any other elements.

“This is a mystery,” Aoki says.

Further complicating the picture is the possibility that the five youngish stars could be red giants after all, thanks to uncertainties in the measurements of their sizes, says astronomer Evan Kirby of Caltech. Future surveys should check stars that are definitely in the same stage of life as the sun.

Still, the new results are “tantalizing,” Kirby says. “It’s a puzzle that’s been around for almost 40 years now, and so far there’s no explanation that has satisfying observational support.”

Your phone is like a spy in your pocket

Consider everything your smartphone has done for you today. Counted your steps? Deposited a check? Transcribed notes? Navigated you somewhere new?

Smartphones make for such versatile pocket assistants because they’re equipped with a suite of sensors, including some we may never think about or even know exist, that sense, for example, light, humidity, pressure and temperature.

Because smartphones have become essential companions, those sensors probably stayed close by throughout your day: the car cup holder, your desk, the dinner table and nightstand. If you’re like the vast majority of American smartphone users, the phone’s screen may have been black, but the device was probably on the whole time.

“Sensors are finding their ways into every corner of our lives,” says Maryam Mehrnezhad, a computer scientist at Newcastle University in England. That’s a good thing when phones are using their observational dexterity to do our bidding. But the plethora of highly personal information that smartphones are privy to also makes them powerful potential spies.

Online app store Google Play has already discovered apps abusing sensor access. Google recently booted 20 apps from Android phones and its app store because the apps could — without the user’s knowledge — record with the microphone, monitor a phone’s location, take photos, and then extract the data. Stolen photos and sound bites pose obvious privacy invasions. But even seemingly innocuous sensor data can potentially broadcast sensitive information. A smartphone’s movement may reveal what users are typing or disclose their whereabouts. Even barometer readings that subtly shift with increased altitude could give away which floor of a building you’re standing on, suggests Ahmed Al-Haiqi, a security researcher at the National Energy University in Kajang, Malaysia.
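The barometer trick rests on simple physics: near sea level, air pressure drops roughly 12 pascals for every meter of altitude. A rough illustration, assuming a 3-meter floor height (both constants are approximations, not values from any study):

```python
# Rough sketch: infer which floor a phone is on from its barometer.
# Near sea level, pressure falls about 12 Pa per meter of altitude;
# the 3 m floor height is an assumption for illustration.

def estimated_floor(ground_pressure_pa, current_pressure_pa,
                    pa_per_meter=12.0, floor_height_m=3.0):
    altitude_m = (ground_pressure_pa - current_pressure_pa) / pa_per_meter
    return round(altitude_m / floor_height_m)

# A reading 216 Pa below street level's suggests ~18 m up: the 6th floor.
print(estimated_floor(101325.0, 101109.0))  # 6
```

The point is not precision; even a coarse estimate like this leaks location detail that users never agreed to share.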

These sneaky intrusions may not be happening in real life yet, but concerned researchers in academia and industry are working to head off eventual invasions. Some scientists have designed invasive apps and tested them on volunteers to shine a light on what smartphones can reveal about their owners. Other researchers are building new smartphone security systems to help protect users from myriad real and hypothetical privacy invasions, from stolen PIN codes to stalking.

Message revealed
Motion detectors within smartphones, like the accelerometer and the rotation-sensing gyroscope, could be prime tools for surreptitious data collection. They’re not permission protected — the phone’s user doesn’t have to give a newly installed app permission to access those sensors. So motion detectors are fair game for any app downloaded onto a device, and “lots of vastly different aspects of the environment are imprinted on those signals,” says Mani Srivastava, an engineer at UCLA.

For instance, touching different regions of a screen makes the phone tilt and shift just a tiny bit, but in ways that the phone’s motion sensors pick up, Mehrnezhad and colleagues demonstrated in a study reported online April 2017 in the International Journal of Information Security. These sensors’ data may “look like nonsense” to the human eye, says Al-Haiqi, but sophisticated computer programs can discern patterns in the mess and match segments of motion data to taps on various areas of the screen.

For the most part, these computer programs are machine-learning algorithms, Al-Haiqi says. Researchers train them to recognize keystrokes by feeding the programs a bunch of motion sensor data labeled with the key tap that produces particular movement. A pair of researchers built TouchLogger, an app that collects orientation sensor data and uses the data to deduce taps on smartphones’ number keyboards. In a test on HTC phones, reported in 2011 in San Francisco at the USENIX Workshop on Hot Topics in Security, TouchLogger discerned more than 70 percent of key taps correctly.

Since then, a spate of similar studies have come out, with scientists writing code to infer keystrokes on number and letter keyboards on different kinds of phones. In 2016 in Pervasive and Mobile Computing, Al-Haiqi and colleagues reviewed these studies and concluded that only a snoop’s imagination limits the ways motion data could be translated into key taps. Those keystrokes could divulge everything from the password entered on a banking app to the contents of an e-mail or text message.
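The underlying machine-learning step can be illustrated with a toy nearest-centroid classifier. The feature values below are invented, and real attacks use far richer motion features, but the train-then-match structure is the same:

```python
# Toy keystroke inference: label short motion-sensor windows with the key
# that was tapped, then classify new windows by nearest mean feature vector.
# Feature values are invented (tilt, shift); real systems use many more.

def centroid(vectors):
    # Average each feature across all training vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def train(samples):
    """samples: {key: [feature_vector, ...]} -> {key: centroid}"""
    return {key: centroid(vecs) for key, vecs in samples.items()}

def classify(model, vec):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda k: sq_dist(model[k], vec))

# Hypothetical training data: tapping "1" (top-left of the keypad) tilts
# the phone one way; tapping "9" (bottom-right) tilts it the other.
training = {
    "1": [[-0.8, 0.5], [-0.7, 0.6]],
    "9": [[0.9, -0.4], [0.8, -0.5]],
}
model = train(training)
print(classify(model, [-0.75, 0.55]))  # "1"
```

A real attacker would replace the centroids with a trained neural network or similar model, but the privacy lesson is identical: labeled motion data plus pattern matching recovers taps.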

A more recent application used a whole fleet of smartphone sensors — including the gyroscope, accelerometer, light sensor and magnetism-measuring magnetometer — to guess PINs. The app analyzed a phone’s movement and how, during typing, the user’s finger blocked the light sensor. When tested on a pool of 50 PINs, the app could discern keystrokes with 99.5 percent accuracy, the researchers reported on the Cryptology ePrint Archive in December.

Other researchers have paired motion data with mic recordings, which can pick up the soft sound of a fingertip tapping a screen. One group designed a malicious app that could masquerade as a simple note-taking tool. When the user tapped on the app’s keyboard, the app covertly recorded both the key input and the simultaneous microphone and gyroscope readings to learn the sound and feel of each keystroke.

The app could even listen in the background when the user entered sensitive info on other apps. When tested on Samsung and HTC phones, the app, presented in the Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless and Mobile Networks, inferred the keystrokes of 100 four-digit PINs with 94 percent accuracy.

Al-Haiqi points out, however, that success rates are mostly from tests of keystroke-deciphering techniques in controlled settings — assuming that users hold their phones a certain way or sit down while typing. How these info-extracting programs fare in a wider range of circumstances remains to be seen. But the answer to whether motion and other sensors would open the door for new privacy invasions is “an obvious yes,” he says.

Tagalong
Motion sensors can also help map a person’s travels, like a subway or bus ride. A trip produces an undercurrent of motion data that’s discernible from shorter-lived, jerkier movements like a phone being pulled from a pocket. Researchers designed an app, described in 2017 in IEEE Transactions on Information Forensics and Security, to extract the data signatures of various subway routes from accelerometer readings.

In experiments with Samsung smartphones on the subway in Nanjing, China, this tracking app picked out which segments of the subway system a user was riding with 59, 81 and 88 percent accuracy, improving as the stretches grew from three to five to seven stations long. Someone who can trace a user’s subway movements might figure out where the traveler lives and works, what shops or bars the person frequents, a daily schedule, or even — if the app is tracking multiple people — who the user meets at various places.
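The matching step can be sketched as a toy correlation test: a recorded acceleration trace is scored against stored signatures for each subway segment, and the best-scoring segment wins. The signatures and trace here are made up; the real system segments the ride, corrects for phone orientation and uses richer classifiers.

```python
# Illustrative sketch with invented data: matching an accelerometer
# trace against stored "signatures" of subway segments.

def correlation(a, b):
    """Pearson correlation between two equal-length traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Acceleration-magnitude signatures for two hypothetical segments:
SIGNATURES = {
    "A->B": [0.1, 0.9, 0.7, 0.2, 0.1, 0.8],
    "B->C": [0.6, 0.2, 0.1, 0.9, 0.5, 0.3],
}

def match_segment(trace):
    """Return the stored segment whose signature best matches the trace."""
    return max(SIGNATURES, key=lambda s: correlation(SIGNATURES[s], trace))

# A noisy ride that resembles the A->B pattern:
print(match_segment([0.15, 0.85, 0.65, 0.25, 0.12, 0.75]))
```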

Accelerometer data can also plot driving routes, as described at the 2012 IEEE International Conference on Communication Systems and Networks in Bangalore, India. Other sensors can be used to track people in more confined spaces: One team synced a smartphone mic and portable speaker to create an on-the-fly sonar system to map movements throughout a house. The team reported the work in the September 2017 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“Fortunately there is not anything like [these sensor spying techniques] in real life that we’ve seen yet,” says Selcuk Uluagac, an electrical and computer engineer at Florida International University in Miami. “But this doesn’t mean there isn’t a clear danger out there that we should be protecting ourselves against.”

That’s because the kinds of algorithms that researchers have employed to comb sensor data are getting more advanced and user-friendly all the time, Mehrnezhad says. It’s not just people with Ph.D.s who can design the kinds of privacy invasions that researchers are trying to raise awareness about. Even app developers who don’t understand the inner workings of machine-learning algorithms can easily get this kind of code online to build sensor-sniffing programs.

What’s more, smartphone sensors don’t just provide snooping opportunities for individual cybercrooks who peddle info-stealing software. Legitimate apps often harvest info, such as search engine and app download history, to sell to advertising companies and other third parties. Those third parties could use the information to learn about aspects of a user’s life that the person doesn’t necessarily want to share.

Take a health insurance company. “You may not like them to know if you are a lazy person or you are an active person,” Mehrnezhad says. “Through these motion sensors, which are reporting the amount of activity you’re doing every day, they could easily identify what type of user you are.”

Sensor safeguards
Since it’s only getting easier for an untrusted third party to make private inferences from sensor data, researchers are devising ways to give people more control over what information apps can siphon off their devices. Some safeguards could appear as standalone apps, whereas others are tools that could be built into future operating system updates.

At the August 2017 USENIX Security Symposium in Vancouver, Uluagac and colleagues proposed a system called 6thSense, which monitors a phone’s sensor activity and alerts its owner to unusual behavior. The user trains this system to recognize the phone’s normal sensor behavior during everyday tasks like calling, Web browsing and driving. Then, 6thSense continually checks the phone’s sensor activity against these learned behaviors.

If someday the program spots something unusual — like the motion sensors reaping data when a user is just sitting and texting — 6thSense alerts the user. Then the user can check if a recently downloaded app is responsible for this suspicious activity and delete the app from the phone.
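In spirit, the check reduces to a threshold rule: compare current sensor-activity levels against the profile learned for the task at hand, and flag large deviations. The baseline numbers and the tolerance below are invented; the actual system builds trained probabilistic models over many sensors.

```python
# Toy sketch of the 6thSense idea: learn typical sensor-activity levels
# for an everyday task, then flag readings that stray far from them.
# All numbers here are invented for illustration.

BASELINE = {"texting": {"accelerometer": 0.2, "gyroscope": 0.1}}

def is_suspicious(task, readings, tolerance=0.5):
    """Flag any sensor whose activity deviates from the learned
    baseline for this task by more than `tolerance`."""
    profile = BASELINE[task]
    return any(abs(readings[s] - profile[s]) > tolerance for s in profile)

# Motion sensors suddenly very busy while the user just sits and texts:
print(is_suspicious("texting", {"accelerometer": 0.9, "gyroscope": 0.8}))  # prints True
```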

Uluagac’s team recently tested a prototype of the system: Fifty users trained Samsung smartphones with 6thSense to recognize their typical sensor activity. When the researchers fed the 6thSense system examples of benign data from daily activities mixed in with segments of malicious sensor operations, 6thSense picked out the problematic bits with over 96 percent accuracy.

For people who want more active control over their data, Supriyo Chakraborty, a privacy and security researcher at IBM in Yorktown Heights, N.Y., and colleagues devised DEEProtect, a system that blunts apps’ abilities to draw conclusions about certain user activity from sensor data. People could use DEEProtect, described in a paper posted online at arXiv.org in February 2017, to specify preferences about what apps should be allowed to do with sensor data. For example, someone may want an app to transcribe speech but not identify the speaker.

DEEProtect intercepts whatever raw sensor data an app tries to access and strips that data down to only the features needed to make user-approved inferences. For speech-to-text translation, the phone typically needs sound frequencies and the probabilities of particular words following each other in a sentence.

But sound frequencies could also help a spying app deduce a speaker’s identity. So DEEProtect distorts the dataset before releasing it to the app, leaving information on word orders alone, since that has little or no bearing on speaker identity. Users can control how much DEEProtect changes the data; more distortion begets more privacy but also degrades app functions.
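That trade-off can be sketched crudely, with invented feature names and a plain noise model rather than the paper's actual transformations: protected features get user-tunable random noise, while features the approved task needs pass through untouched.

```python
# Crude sketch of DEEProtect's privacy-utility dial. Feature names and
# the noise model are invented; they are not the system's real design.

import random

def release(features, protect=("frequency",), distortion=0.3, seed=0):
    """Return a copy of `features` with protected ones perturbed."""
    rng = random.Random(seed)  # fixed seed just to make the demo repeatable
    out = {}
    for name, value in features.items():
        if name.startswith(protect):
            # more distortion means more privacy, but a less useful signal
            out[name] = value + rng.uniform(-distortion, distortion)
        else:
            out[name] = value  # task-relevant feature passes unchanged
    return out

raw = {"frequency_band_1": 0.42, "word_order_prob": 0.87}
print(release(raw)["word_order_prob"])  # prints 0.87, untouched
```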

In another approach, Giuseppe Petracca, a computer scientist and engineer at Penn State, and colleagues are trying to protect users from accidentally granting sensor access to deceitful apps, with a security system called AWare.

Apps have to get user permission upon first installation or first use to access certain sensors like the mic and camera. But people can be cavalier about granting those blanket authorizations, Uluagac says. “People blindly give permission to say, ‘Hey, you can use the camera, you can use the microphone.’ But they don’t really know how the apps are using these sensors.”

Instead of asking permission when a new app is installed, AWare would request user permission for an app to access a certain sensor the first time the user provides a certain input, like pressing a camera button. On top of that, the AWare system memorizes the state of the phone when the user grants that initial permission — the exact appearance of the screen, sensors requested and other information. That way, AWare can tell users if the app later attempts to trick them into granting unintended permissions.

For instance, Petracca and colleagues imagine a crafty data-stealing app that asks for camera access when the user first pushes a camera button, but then also tries to access the mic when the user later pushes that same button. The AWare system, also presented at the 2017 USENIX Security Symposium, would realize the mic access wasn’t part of the initial deal, and would ask the user again if he or she would like to grant this additional permission.
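The remembered-context idea can be sketched as a lookup: the sensors granted on first use of a given button are bound to that app-and-button pair, and any later request that exceeds them fails the check, which would trigger a fresh prompt. This data model is a heavy simplification of what AWare actually records (the real system also stores the screen's appearance, for instance).

```python
# Toy version of AWare's remembered-context check, simplified from the
# description above; the structures here are illustrative only.

grants = {}  # (app, widget) -> frozenset of sensors the user approved

def on_first_use(app, widget, sensors):
    """User presses `widget` for the first time and approves `sensors`."""
    grants[(app, widget)] = frozenset(sensors)

def allowed(app, widget, sensors):
    """A later request passes only if it stays within the remembered grant."""
    remembered = grants.get((app, widget))
    return remembered is not None and frozenset(sensors) <= remembered

on_first_use("photoapp", "camera_button", {"camera"})
print(allowed("photoapp", "camera_button", {"camera"}))          # prints True
print(allowed("photoapp", "camera_button", {"camera", "mic"}))   # prints False
```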

Petracca and colleagues found that people using Nexus smartphones equipped with AWare avoided unwanted authorizations about 93 percent of the time, compared with 9 percent among people using smartphones with typical first-use or install-time permission policies.

The price of privacy
The Android security team at Google is also trying to mitigate the privacy risks posed by app sensor data collection. Android security engineer Rene Mayrhofer and colleagues are keeping tabs on the latest security studies coming out of academia, he says.

But just because someone has built and successfully tested a prototype of a new smartphone security system doesn’t mean it will show up in future operating system updates. Android hasn’t incorporated proposed sensor safeguards because the security team is still looking for a protocol that strikes the right balance between restricting access for nefarious apps and not stunting the functions of trustworthy programs, Mayrhofer explains.

“The whole [app] ecosystem is so big, and there are so many different apps out there that have a totally legitimate purpose,” he adds. Any kind of new security system that curbs apps’ sensor access presents “a real risk of breaking” legitimate apps.

Tech companies may also be reluctant to adopt additional security measures because these extra protections can come at the cost of user friendliness, like AWare’s additional permissions pop-ups. There’s an inherent trade-off between security and convenience, UCLA’s Srivastava says. “You’re never going to have this magical sensor shield [that] gives you this perfect balance of privacy and utility.”

But as sensors get more pervasive and powerful, and algorithms for analyzing the data become more astute, even smartphone vendors may eventually concede that the current sensor protections aren’t cutting it. “It’s like cat and mouse,” Al-Haiqi says. “Attacks will improve, solutions will improve. Attacks will improve, solutions will improve.”

The game will continue, Chakraborty agrees. “I don’t think we’ll get to a place where we can declare a winner and go home.”

New device can transmit underwater sound to air

Don’t expect to play a game of Marco Polo by shouting from beneath the pool’s surface. No one will hear you because, normally, only about 0.1 percent of sound is transmitted from water to the air. But a new type of device might one day help.

Researchers have designed a new metamaterial — a type of material that behaves in ways conventional materials can’t — that increases sound transmission to 30 percent. The metamaterial could have applications for more than poolside play. A future version might be used to detect noisy marine life or listen in on sonar use, say applied physicist Oliver Wright of Hokkaido University in Sapporo, Japan, and a team at Yonsei University in Seoul, South Korea, who describe the metamaterial in a paper accepted to Physical Review Letters.

Currently, underwater sounds are detected with hydrophones, which must themselves be submerged. But what if you wanted to listen in from the surface?

Enter the new device. It’s a small cylinder with a weighted rubber membrane stretched across a metal frame that floats atop the water surface. When underwater sound waves hit the device, its frame and membrane vibrate at finely tuned frequencies to help sound transmit into the air.

“A ‘hard’ surface like a table or water reflects almost 100 percent of sound,” says Wright. “We want to try to mitigate that by introducing an intermediary structure.”

Both water and air resist the flow of sound, a property known as acoustic impedance. Because of its much greater density, water’s acoustic impedance is about 3,600 times that of air. The greater the mismatch between two media, the more sound is reflected at the boundary between them.
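The "about 0.1 percent" figure quoted earlier falls out of the standard normal-incidence formula for transmitted sound power, T = 4·Z1·Z2 / (Z1 + Z2)², where Z1 and Z2 are the acoustic impedances of the two media. A quick check with the 3,600-to-1 ratio:

```python
# Fraction of sound power transmitted across a boundary at normal
# incidence, from the standard acoustic-impedance mismatch formula.

def transmitted_fraction(z1, z2):
    """T = 4*z1*z2 / (z1 + z2)**2 for media with impedances z1 and z2."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

# Water's acoustic impedance is roughly 3,600 times air's:
print(f"{transmitted_fraction(1, 3600):.2%}")  # prints 0.11%
```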

Adding an intermediate layer one-fourth as thick as the incoming wave’s wavelength can reduce the amount of reflection. This is the principle behind anti-reflective coatings applied to the lenses of cameras and glasses. Visible light has wavelengths of a few hundred nanometers, so such a coating need be only about a hundred nanometers thick; audible sound waves, by contrast, can be meters long.

Even though the metamaterial is only one-hundredth as thick as the sound’s wavelength, far thinner than the conventional one-fourth, it still transmits sound.

“It’s a tour de force of experimental demonstration,” says Oleg Godin, a physicist at the Naval Postgraduate School in Monterey, Calif., who was not involved with the research. “But I’m less impressed by the suggestions and implications about its uses. It’s wishful thinking.”

One major problem the researchers would have to overcome is the device’s inability to transmit sound that hits the surface at an angle. In the lab, the device is tested in a tube — effectively a one-dimensional environment. But on the vast surface of a lake or ocean, the device would be limited to transmitting sounds from the small area directly below it. Additionally, the metamaterial transmits only a narrow band of frequencies. Noise outside that range reflects off the water’s surface as usual.

Still, the scientists are optimistic about the next steps, and even propose that a sheet of these devices could work in concert.