11/29/2016

HP Enterprise demonstrates next-generation computing prototype as The Machine comes together


Two and a half years ago, HP (now HP Enterprise after the company split) revealed a new, revolutionary computer architecture it dubbed “The Machine.” This new computing platform would combine cutting-edge and still unproven technologies like memristors, silicon photonics, and truly massive amounts of addressable memory. HPE was forced to dial back some of its ambition when it proved too difficult to bring the entire project to market all at the same time, but it refused to give up on the idea of what it calls “Memory-Driven Computing.”

Today, HPE is announcing that it has demonstrated the major components of this new type of system, albeit in prototype form. The Machine as currently constituted consists of:
  • Compute nodes accessing a shared pool of Fabric-Attached Memory
  • An optimized Linux-based operating system (OS) running on a customized System on a Chip (SOC)
  • Photonics/Optical communication links, including the new X1 photonics module, are online and operational
  • New software programming tools designed to take advantage of abundant persistent memory.

The Machine (well, one “blade” of it, anyway). The DIMM slots are filled with NAND Flash; HPE wants to transition to lower-latency memory that competes directly with DRAM in the next few years.

HPE has previously shown off some of these components, like its X1 silicon photonics module. The X1 module is capable of transferring data at up to 1.2Tbps (150GB/s of bandwidth) over a 30-50 meter distance. HPE has also demonstrated silicon photonics technology that can move data up to 50 kilometers (30 miles) at 200Gbps. HPE’s major goal with The Machine is to create a system in which non-volatile memory (NVM) serves as a true DRAM replacement, offering at least equivalent latency with drastically reduced power consumption and low-latency optical interconnects.

Customers will still have the option to deploy The Machine as a conventional system, but HPE’s plan is to offer huge pools of NVM that can be shared across many SoCs. While the diagrams below only refer to CPUs, there’s no reason this model couldn’t be extended to other types of accelerators — vector processors like Intel’s Xeon Phi or GPUs from AMD and Nvidia could at least theoretically be paired with HPE’s new architecture. The following slideshow steps through some of HPE’s design elements, and the benefits it expects to offer with The Machine compared to traditional systems.
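To make the memory-driven idea concrete, here is a minimal sketch of what programming against a large pool of directly addressable, persistent memory can look like. This is a generic illustration using ordinary memory-mapping, not HPE’s actual toolchain; the pool path is a hypothetical stand-in for however fabric-attached memory would be exposed to the operating system.

```python
# Conceptual sketch, not HPE's programming tools: treat a pool of non-volatile
# memory as byte-addressable storage by mapping it into the process address space.
import mmap
import os
import struct

POOL_PATH = "/mnt/nvm/shared_pool"   # hypothetical fabric-attached memory pool
POOL_SIZE = 1 << 20                  # map 1MB for this illustration

fd = os.open(POOL_PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, POOL_SIZE)
pool = mmap.mmap(fd, POOL_SIZE)

# Store a record directly in the mapped pool -- no serialization to disk needed.
pool[0:8] = struct.pack("<Q", 42)

# Any process (or SoC) with the pool mapped reads the same bytes back, and the
# data survives a restart because the backing medium is non-volatile.
value, = struct.unpack("<Q", pool[0:8])
print(value)  # 42

pool.flush()   # push the update out to the persistent medium
pool.close()
os.close(fd)
```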


When HPE announced that it would re-purpose The Machine’s design around conventional technology in mid-2015, it seemed to imply that the project’s groundbreaking potential had been largely buried beneath financial realities and the slow pace of technological innovation that characterizes modern semiconductor development. Today, I have to acknowledge that this dismissal was premature. HPE may not be planning to commercialize memristor technology in the near term, but The Machine is more than a conventional server with a huge amount of RAM, and the company’s work on low-latency non-volatile memory and optical interconnects could have significant implications for HPC and Big Data problems for years to come. The Machine won’t debut as a single system with all-new technologies but should transition to new memory standards as they become available. This might make it a touch less exciting on launch, but should lay the groundwork for a long-term virtuous cycle of improved performance and reduced power consumption, up to and including (eventually) exascale-class deployments.

No Man’s Sky ‘Foundation’ update is live, but don’t expect a whole new game



It would be fair to say No Man’s Sky has been one of the biggest disappointments in modern gaming history. After two years of extraordinary hype, the game launched earlier this year to mostly negative reviews. Many of the features promised by developer Hello Games were missing, and the final game was scarcely a match for the awe-inspiring trailers. The company’s silence after months of frantic communication went over poorly with much of the community, to the point that hackers briefly took over the firm’s social media accounts to apologize for the game. Hello Games is now trying to redeem itself with the “Foundation” patch, which has just launched on PC and PS4.

I’ve been playing around with the Foundation patch (AKA v1.1) to see if it fundamentally changes the experience. The headlining change in this patch is the option to build bases. Yes, in a game of exploration, you can choose a home planet. The resources you gather from around the galaxy can be used to build new sections of the base, eventually leading to complex, labyrinthine corridors. Your bases can be used to grow resources, research new technologies, and house your alien helpers.
If setting up shop on a planet doesn’t appeal to you, there are expensive freighter ships available that are essentially flying bases. These ships can move between star systems, taking all your items, grow chambers, and alien helpers along for the ride. In addition to having more space to keep things in your base and freighter, there’s a quick-access inventory system that makes it easier to find items. This is much appreciated as the clunky inventory system at launch was one of the most tedious gameplay features.

The most serious issue with No Man’s Sky at launch was just how uniform everything was. If you’d seen one planet, you’d seen them all. The procedural generation was so uniform that planets lacked large-scale distinctive features like mountains, oceans, and deserts. Hello Games says that changes to the landscape algorithm allow for more aesthetically pleasing planets and biome-specific resources. I don’t know that I’m seeing much difference yet, but the graphical tweaks like motion blur and antialiasing do look nicer. One big drawback here: Hello Games had to regenerate all planets in the game. Continuing a previous game might mean your character could be in a different environment than before the update.

No Man’s Sky has also been split into three game modes with the Foundation update. There’s the original game mode, creative mode, and survival mode. Creative mode grants you unlimited health and resources to build whatever you want—it’s basically sandbox mode. Survival mode is the opposite; planets are more dangerous and resources are more rare. You will die a lot in survival mode.
Admittedly, the Foundation update brings a lot of new features, but this is still No Man’s Sky. If you didn’t like it before, you probably still won’t. There’s very little story to speak of, and the animals still look fairly ridiculous much of the time. If you were on the fence, some of this might entice you to give it another shot. Players at least seem to be giving No Man’s Sky 1.1 a chance: after dropping dramatically post-launch, No Man’s Sky is back in the top 50 most-played games on Steam with a peak of about 7,700 players on Monday.

Cold, dead fingers: Will motorists give up in-car cellphone access without a fight?



Highway deaths are up in the United States, as is the use of smartphone apps. The National Highway Traffic Safety Administration sees a link and last week issued guidelines – not mandates – that would have the major phone-OS developers figure out a way to severely limit the functionality of smartphones while a car is moving. But under NHTSA’s proposal, only the driver’s phone would be crippled, not the passengers’ phones.

Calling features would still be enabled, via handsfree link. Incoming texts would be read aloud and a speech-to-text response could be generated. Music playback would also be available. That’s about it.

Spike in fatalities unseen since the 1960s, NHTSA says

NHTSA says the 7% increase in fatalities in 2015 is unprecedented in the past half century, and preliminary data on 2016 suggests the death toll rose a further 10% this year. The last time the increase was that steep was the 1960s, when fatalities rose year over year by 7% to 9% in four of the 10 years. Since then, the death toll (total fatalities) has been flat or fallen in 26 of 46 years.


Traffic deaths each year per 100,000 population and per 100 million vehicle miles traveled, left axis, and total motor vehicle deaths per year (car, truck, bus, motorcycle, bicyclist, pedestrian). Source: NHTSA FARS database.

How big is the problem? Depends on what stats you use

The concern by safety officials comes from a spike in traffic fatalities. How big of a spike depends on which data gets publicized. The chart above is from NHTSA’s Fatality Analysis Reporting System (FARS). Here are three ways to look at the data.
  • Raw fatalities. This is how many people died in a year. Since the US population grows each year, using this data set often provides the most dramatic increases. The death toll in the US in 1970 (the start of the safety era, with seat belts mandatory since 1968) was 52,627 when the population was 205 million; in 2015 the US population was 321 million, 57% more, but the death toll was a third lower, at 35,092. So, historically the death rate is down. But in 2015 the death toll was 2,417 more than in 2014, or 7% more. The five-year increase, 2015 vs. 2010, was 6%.
  • Deaths per 100,000 per year. This accounts, somewhat, for our growing population over time. It’s a measure that’s understandable. If you live in a town of 100,000, on average 11 people died last year. In 1970, it was 26 per 100,000. From 2014 to 2015, deaths per 100,000 went up 7%, too, same as the raw fatalities rate. The five-year increase was 2% (2015 vs. 2010). Deaths-per-100K peaked in the 1930s at almost 30 per 100,000 people. It drops in bad economic times, during World War II (driving restrictions), and during gasoline shortages (mid-1970s, early 1980s), but it’s generally heading down.
  • Deaths per 100 million vehicle miles traveled (VMT). This statistic does the best job of factoring in fluctuations in the economy and fuel shortages, as well as safety improvements in cars and roads. The line slopes steadily downward, from almost 25 deaths per 100 million VMT in 1921 (the first year of more detailed auto fatalities record-keeping)  to just over 1 last year. In 2014 it was the lowest in history, 1.08 deaths per 100 million VMT. In 2015 it was 1.11, an increase of 3%, the only increase in the past decade. That’s noticeable and worrisome, but 3% isn’t 7%.
Even as auto fatalities have gone up 3% or 7% in the past year, you’re still pretty safe in a car. Multiply 11 fatalities per 100,000 people by the 80 years a person spends in a car as a passenger or driver (or out walking or biking), and you get 880 fatalities per 100,000, or slightly less than a 1% chance of dying in a motor vehicle accident over your lifetime. That’s low, but still high enough that you’ll likely know of a friend or family member killed in an accident. It’s a leading cause (for years the leading cause) of death among children and adults under 25. For better or worse, suicides, homicides, and drug deaths are now challenging auto accidents among those 25 and under.
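For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope version of the figures above, using the rounded numbers quoted from NHTSA’s FARS data:

```python
# Rough check of the fatality figures quoted above (NHTSA FARS data, rounded).
deaths_2015 = 35_092
deaths_2014 = deaths_2015 - 2_417          # 2015 was 2,417 deaths higher
population_2015 = 321_000_000

yoy_increase = deaths_2015 / deaths_2014 - 1
per_100k = deaths_2015 / (population_2015 / 100_000)

# Rough lifetime risk: the annual rate per 100,000 times ~80 years of exposure.
lifetime_risk = per_100k * 80 / 100_000

print(f"Year-over-year increase: {yoy_increase:.1%}")    # ~7.4%
print(f"Deaths per 100,000:      {per_100k:.1f}")        # ~10.9
print(f"Lifetime risk:           {lifetime_risk:.2%}")   # ~0.87%, just under 1%
```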

What are the odds a cellphone kill switch is adopted?

By asking cellphone-makers and OS providers to voluntarily restrict what services the phone provides in a moving car, the feds look a bit less like spoilsports. There is likely to be pushback. Whenever drivers are asked to self-rate their driving, the majority say they’re above average, which isn’t possible statistically, and they might bristle at seeing their “freedoms” curtailed. (See below.)

There are reports that Apple in 2008 applied for a patent on technology that would lock out phones that were in motion; the technology supposedly could determine which phone was the driver’s. The patent was cited in a 2014 lawsuit over an alleged distracted-driving fatality.

CellControl, a Baton Rouge, LA, company, offers DriveID, a dash-mounted $129 hardware-software product that can limit the driver’s use of his or her phone. DriveID is managed by parents in a family situation or by a fleet manager in a commercial setting. Pokemon Go has been cited as a factor in some car crashes, so much so that the current version has a lockout that disables the app when it’s being played at greater than walking speeds. The change was hastened by a video showing a Pokemon Go motorist sideswiping a police car.

It’s possible phone-makers will use a limit-your-phone-access mandate as a reason to keep from adding competing apps into the handful now enabled under Apple CarPlay and Android Auto. Specifically: You wouldn’t see Apple Maps on an Android phone and you wouldn’t see Google Maps or Waze on an Apple phone running CarPlay.

Americans didn’t like enforced ignition interlocks

History shows motorists reacted badly to heavy-handed behavior modification. In the mid-1970s, cars were equipped with ignition interlocks that wouldn’t allow the car to be started until the driver and front seat passenger had buckled their seat belts. This was supported by the auto industry in part to stave off mandatory airbags, which at the time were seen as a substitute for seat belts. The buzzers were loud, sometimes electrical gremlins kept the cars from starting, and on many cars it was easy to defeat the interlock. Automakers pulled back within a couple years.

Circa 1980 some automakers, again trying to keep airbags out of the picture, installed automatic seat belts with complex (read: not always reliable) mechanical arms that draped the seatbelt across the driver’s and passenger’s torsos. If driver or passenger opened the door while backing up, the belt retracted and wrapped itself around the occupant’s neck. This, too, died an early death.

Is there a sensible compromise?

Given how little Americans like to be told what to do when driving, they’re not going to like limits on their freedom to use their phones in ways ranging from distracting or potentially hazardous to downright stupid.

Driver-assist technology can help make the occupants of a car safer should the driver be distracted while creating or reading a long text. A car equipped with lane departure warning, adaptive cruise control, pedestrian detection, and forward emergency braking will alert the driver if the car ahead suddenly slows, or if the car is drifting out of lane. For someone who’s texting, that is either a safety feature … or it’s an enabler for someone to write longer texts before they get in trouble.

While texting is the main concern, better voice recognition and one-shot destination entry will make it easier to enter an address without stopping. Right now, every phone has that feature, but only some embedded navigation systems do. Already, most automakers lock out the LCD display’s ability to tap in an address while the car is moving. Automakers so far have resisted implementing technologies that could recognize when a passenger, not the driver, is entering address information. Mercedes-Benz offers a unique technology, SplitView, that shows the driver and passenger different views of the same center stack screen: alternating pixels point left and right.

If NHTSA moves to rein in the use of smartphones in cars, there’s likely to be pushback. Even if distracted driving is the cause of the current 3%-7% increase in fatalities, drivers still see the overall risk to themselves as minor. And there’s conflicting research as to what constitutes distraction. Studies show that just talking on a hands-free phone takes up some of the driver’s attention. Tuning the car radio has always been distracting. Some research says talking to a passenger can be distracting. In the past, two of the biggest distractions were the driver dropping a lit cigarette in his or her lap, or a bee flying in the car. Those are smaller distractions now, with the decrease in smoking and with air conditioning that allows for windows-closed driving.

Homeopathic solutions now have to be labeled to disclose that there’s no science behind them



The FTC is playing whack-a-mole with pseudoscience again, and this time it’s targeting homeopathy. The agency’s latest comments contend (PDF) that the standard disclaimer isn’t enough to dissuade consumers from buying this crap, so now not only do homeopathic products have to carry the standard disclaimer, they also have to say there’s no science behind them.

I support the idea that it’s your body, so do with it what you want, and if taking homeopathic sugar pills makes you feel better, sure, keep taking them. But the only reason I can’t make a blanket statement that homeopathy provides no benefit whatsoever is the placebo effect. To paraphrase Tim Minchin’s memorable, NSFW rant on alternative medicine, Storm: without fail or exception, every kind of alternative medicine has either not been proved to work, or been proved not to work. Do you know what alternative medicine that’s been proven to work is called? Medicine. Homeopathy must not be confused with medicine.

The FTC agrees, and it’s really emphatic about it. Homeopathic products don’t work: because further dilution is supposed to increase a homeopathic remedy’s strength, many homeopathic products contain no detectable traces of their title ingredients. This hilarity is a good thing for consumers, because homeopathy’s other fundamental principle of “like cures like” means the solutions could be deadly if they actually contained any of their title ingredients: among a great many other products, homeopaths prescribe nux vomica (strychnine), arsenica album (arsenic), and even teething products containing extract of belladonna, which it turns out is not great for kids. In reality, homeopathic products tend to be recalled when it’s discovered that they contain their title ingredient. Three separate times in its recent press release, the FTC maintains that the standard “has not been proven to treat or cure any condition” disclaimer is unlikely to be enough to adequately convey how completely valueless homeopathic solutions are. Homeopathy has been rejected by science, medicine, and now by the FTC.
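The dilution arithmetic is worth spelling out. Each “C” step dilutes the preparation by a factor of 100, so a 40C remedy (like the hypothetical product mentioned at the end of this piece) has been diluted by a factor of 100^40; even a full mole of active ingredient leaves, in expectation, essentially nothing behind. A rough sketch:

```python
# Back-of-the-envelope dilution math behind the "no detectable traces" claim.
AVOGADRO = 6.022e23          # molecules in one mole of starting material
dilution_factor = 100 ** 40  # a 40C homeopathic dilution: 1:100, forty times

expected_molecules = AVOGADRO / dilution_factor
print(expected_molecules)    # ~6e-57 -- effectively zero molecules remain
```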

Debunking homeopathy has already been done so beautifully that nobody needs me to do it here. If you haven’t yet had a headache today, go search for “succussion” and that’ll fix that right up for ya. I present to you Munroe’s razor:

Not to be confused with 'making money selling this stuff to OTHER people who think it works', which corporate accountants and actuaries have zero problems with. 
Image: Randall Munroe, xkcd.com/808/

This is not government overreach. This is the government intervening to put the kibosh on outright fraud. Products which definitionally cannot do what they claim to do and contain none of what they claim to contain are fraudulent. Nobody benefits from their sale but the fraudster. This is the FTC attempting to intervene in the one way it can, because the next step up would be banning homeopathic products outright and we don’t have a reason to go there yet. Consumers can buy what they want, even if it’s terribly advised. The principle of caveat emptor puts the responsibility on the buyer to not go carting off pallets of triple-distilled homeopathic 40C oleus serpentius.

New teardown reveals Surface Studio packs ARM CPU core, upgradable storage



Microsoft’s new Surface Studio is an interesting foray into high-end premium hardware, but it’s clearly not intended as a user-expandable system. It’s always interesting to see how companies balance slim, cutting-edge hardware against basic repairability, and Microsoft’s track record in this area is pretty mixed. iFixit’s recent teardown of the Surface Studio found some expected downsides, but a few intriguing details and user-friendly options as well.

It’s not too difficult to open the bottom of the system, but there are some wire leads to watch, and you’ll have to remove the midframe to access the guts of the machine. The cooling solution appears quite robust — Microsoft is using a multi-heatpipe system with dual fans, one for the CPU and one for the GPU. The larger fan is used for the GPU, while the smaller serves the CPU (the GPU also exhausts directly, while the CPU fan pushes air into a plastic channel).



The dual heatpipe. Image by iFixit

Microsoft is using a hybrid cache solution, but earlier reports that implied the system used a Seagate SSHD were apparently invalid. There are two general ways to create a hybrid SSD platform: You can use a small amount of SSD storage to accelerate both reads and writes (we’ve covered this type of solution in previous reviews), or you can deploy an SSHD. SSHDs contain both NAND flash and conventional magnetic media in the same 2.5-inch enclosure, and they typically have different caching strategies and less total NAND than hybrid solutions that pair separate NAND and magnetic media storage pools. iFixit’s Surface Studio has 64GB of NAND flash in total, and that could theoretically be expanded to 128GB if you have an appropriate workbench and the necessary expertise.
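For readers unfamiliar with the first approach, here is a deliberately simplified sketch of SSD caching in front of a hard drive: a small, fast tier keeps recently used blocks close at hand while the disk remains the system of record. This is a generic illustration of the technique, not Microsoft’s or Seagate’s actual caching logic.

```python
# Toy model of a hybrid (SSD-cached) drive: a small LRU cache of hot blocks on
# NAND in front of a large, slow backing store on magnetic media.
from collections import OrderedDict

class HybridStore:
    def __init__(self, hdd, ssd_capacity_blocks):
        self.hdd = hdd                      # dict: block_id -> data (slow tier)
        self.ssd = OrderedDict()            # LRU cache of hot blocks (fast tier)
        self.capacity = ssd_capacity_blocks

    def read(self, block_id):
        if block_id in self.ssd:            # cache hit: serve from NAND
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.hdd[block_id]           # cache miss: fall back to the disk
        self._promote(block_id, data)
        return data

    def write(self, block_id, data):
        self.hdd[block_id] = data           # write through to the disk
        self._promote(block_id, data)       # and keep a hot copy on NAND

    def _promote(self, block_id, data):
        self.ssd[block_id] = data
        self.ssd.move_to_end(block_id)
        while len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)    # evict the least recently used block
```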

Unfortunately, anyone who dreamt of upgrading the onboard RAM or other components will be disappointed — all of the onboard DRAM is soldered to the motherboard, as is the CPU. There’s no upgrading the GPU, either; the GTX 965M / 980M isn’t mounted to an MXM-compatible PCB. It might one day be possible to perform a full mainboard swap by buying a Studio on eBay and tearing it down for parts, but that’s obviously not a cost-effective solution for most people and won’t be a realistic option for several years, if at all.


The Surface Studio’s RAM, GPU, and CPU are all mounted on the same board. Image by iFixit.
The display can be removed and replaced, but only by cutting through a glue layer and removing the rear hinge. iFixit rates this as a fairly straightforward repair if all you need to do is replace a cracked glass screen. The display includes a second motherboard with some Microsoft-branded chips, some NOR flash, and a 32-bit ARM Cortex-M7 processor. We don’t talk much about ARM’s embedded controllers, but the M7 is supposed to be a significant improvement over ARM’s previous embedded chips, as shown below:

Cortex-M series performance comparison graph
The Cortex-M7 is used as part of Microsoft’s PixelSense display, and it’s not the first time we’ve seen ARM controllers popping up in PC hardware. For years, conventional wisdom predicted that x86 and ARM were headed for an inevitable confrontation across the smartphone, tablet, and even the laptop market. That seems much less likely to happen now, but we’re seeing some interesting trends from companies that integrate ARM and x86 silicon side-by-side, whether that’s as an on-chip security solution (AMD’s use of ARM’s TrustZone) or using an embedded ARM controller to handle specific tasks, as Microsoft has done.

Overall, the Surface Studio wins a 5/10 score for its repairability. The 8GB limit on the lower-end Surface Studio (to the extent that any $3000+ system can be called “lower end”) could give some buyers pause, given that there’s no way to upgrade the platform. Conventional desktop users can easily get by with this, but the content creation professionals that Microsoft is targeting really could’ve used a 16GB baseline configuration.

10/05/2016

Replacement Samsung Galaxy Note 7 catches fire on airplane


Samsung’s Galaxy Note 7 was supposed to be a cutting-edge halo device with an aggressive set of new features. Samsung’s original plan was to launch the device before Apple could field the iPhone 7, then reap the rewards of positive press while Apple grappled with unhappy customers who didn’t want to shell out $160 for plastic earrings and preferred their devices with the ubiquitous headphone jack.

Instead, the Note 7 is mostly known for being a fire hazard. To Samsung’s credit, the company has been extremely proactive about dealing with the problem, and has repeatedly pushed its customer base to return the device for a refund or replacement. Unfortunately, the company’s battery problems might not be over.
Southwest Airlines Flight 944 from Louisville to Baltimore had to be evacuated this morning after passenger Brian Green’s Samsung Galaxy Note 7 caught fire. The Verge spoke to Green, who confirmed that his phone’s packaging carried the black square marking that distinguishes a replacement Note 7 from an original unit, and that its battery icon was green; those are the indicators Samsung has used to visually distinguish the new, safe Note 7s from the defective variants. The phone began smoking after Green turned it off and placed it in his pocket at the request of the flight crew. The plane was still at the gate and could safely be evacuated, but reports suggest that the phone (which was left on board for obvious reasons) burned through the carpet and into the subfloor of the aircraft.

Brian Green’s Galaxy Note 7 box, with the black square marking. (Image from The Verge)

Right now, it’s not clear if this failure was a one-off or a sign that Samsung’s battery problem isn’t as fixed as the company thought it was. At this point, either is possible. The fact is, a certain percentage of smartphones will suffer catastrophic battery failure. Even so, there are battery chemistries that are more or less susceptible to thermal runaway. It’s also possible to include safety systems that can react and stop a thermal runaway from causing a catastrophic fire, though I’m not aware of any of these being used in the space-constrained mobile environment, where manufacturers simultaneously slash thickness and push for improved battery capacities.

Right now, the explanation for Samsung’s battery failures is that one of the batteries the company uses was slightly too thick for the battery compartment. While this did not prevent the battery cover from fitting on the device, the cells of the lithium-ion battery were slightly compressed and therefore more likely to short-circuit. It’s possible that Samsung truly fixed the problem and this particular failure is unfortunately timed. It’s also possible that either some flawed batteries were mistakenly shipped in new units, or that the company’s initial recall failed to completely address the problem.

If this is a one-off for the replacement devices, Samsung shouldn’t have too much of a problem moving forward — but if more “good” Note 7s keep catching fire, the company’s entire Note line could be tarnished for good. Earlier, we praised Samsung’s quick response to the problem, but if it turns out the company didn’t catch every failure, the story will blow up even larger than before.

9/25/2016

SpaceX blames recent rocket explosion on helium tank breach





Several weeks ago, SpaceX suffered another blow to its efforts to make private spaceflight a viable business when a Falcon 9 rocket exploded on the launchpad during a pre-flight test. The explosion resulted in the loss of the rocket and the payload, a satellite that was to be used for Facebook’s internet.org project. The firm began an investigation immediately, and has now provided an update on the cause. The explosion was reportedly a result of a breached helium containment system.

This might sound familiar, because it was also a problem with the helium storage tanks that caused the 2015 in-flight explosion that destroyed a resupply payload bound for the International Space Station. SpaceX is careful to point out that the recent anomaly had nothing to do with the 2015 explosion. In that incident, a strut that held the helium containers in place failed long before it had reached its rated tolerance, which caused the helium tank to rupture and led to the breakup of the rocket.

In the September 2nd incident (which Elon Musk says was not an explosion, but just a very fast fire), the helium storage system inside the second-stage oxygen tank was breached. Helium is used to maintain proper pressurization inside the propellant tanks as fuel is depleted. Because it’s stored supercooled and at very high pressure, a breach can lead to serious damage to the rocket.

While SpaceX knows what caused the explosion, it doesn’t yet know why. SpaceX is working with the FAA, NASA, the US Air Force, and outside industry experts to determine how the helium tank was damaged, but there’s precious little data from the event itself. The Falcon 9 reports more than 3,000 different channels of engineering data to mission control, but from the first signs of an anomaly to complete loss of data was just 93 milliseconds. The investigation team has scoured the landscape near Launch Complex 40 to find all the debris from the rocket. It has all been cataloged, photographed, and stored in a hangar for further analysis.

Video of the anomaly (seen above) is quite dramatic, but as per regulations, no one was nearby during the fueling operation. There was substantial damage to the launchpad systems, but all the SpaceX support facilities nearby managed to get away with only minor damage. The company is still planning to launch again in November and is still building rocket components at its California facility for cargo missions and the upcoming NASA Commercial Crew Program flights. If the September 2nd incident is found to have been caused by the Falcon 9 design, SpaceX will be able to make changes to the rocket before getting back to the launchpad.

9/23/2016

Human skeleton found on shipwreck that held mysterious Antikythera Mechanism


In January of 1900, deep divers off the coast of a Greek island took shelter from a storm, and found a two-thousand-year-old shipwreck strewn with bones, loot, and a mysterious artifact, half-buried in the sand. Now a team of underwater archaeologists have uncovered a partial skeleton from the same wreck, in such shockingly good condition that they’re going to attempt a DNA extraction on the remains.

Deep diving is dangerous; divers breathe a different mix of gases while they’re underwater, and nitrogen in their air tanks can cause narcosis, which Jacques Cousteau called the “rapture of the deep.” On the first exploration of the wreck, which lies 150 feet down, the first diver surfaced with reports of bodies and artifacts and even submerged horses; the captain didn’t believe a word of the story, assuming it was nitrogen narcosis making the diver spin such tales. But the diver was fine. By 1901, those deep divers had brought to the surface a hoard of buried treasure, including a mysterious clockwork artifact, corroded and crushed, with a gearwheel sticking out of it and markings that nobody understood. They called it the Antikythera device, after the island near which it was found. They fished out everything they could get off the ocean floor, every amphora and coin they could find, and called it done.

Cousteau heard of the Antikythera device shortly after the first publication about it, by bespectacled British historian Derek de Solla Price. Incredibly, the device had just been mothballed in museum storage for half a century because nobody believed the people of the shipwreck’s era could have built it. When scientists finally took an interest and started imaging it, everyone was stunned at the complexity of the mechanism. Cousteau himself came to investigate the wreck in the 1970s, and excavated a buried tableau beyond anyone’s expectations, dated to the first century BC: statues, jewelry, money, weapons — and several sets of barely recognizable human remains.

Decades later, we’re doing better with diving tech. The crew doing the underwater excavation is breathing something called “trimix,” which is a cocktail of helium, nitrogen, and oxygen better suited to spending time at depth. We’ve got pressure suits, too, and hyperbaric chambers if something should go wrong. We’re still sifting through the site and finding buried artifacts, but every detail we find raises more questions. The skeleton we just found is no exception. These aren’t the only human remains from the Antikythera wreck; they’re just the best preserved, by a long shot — well enough that Hannes Schroeder and his team are going to try to get DNA out of them.

The jawbone. Image: Brett Seymour, EUA/WHOI/ARGO, via Nature News

The remains consist of a partial skull with teeth, two thighbones, two arm bones, and some ribs. “It doesn’t look like bone that’s 2,000 years old,” says Schroeder, an expert in ancient-DNA analysis who’s personally working on the DNA extraction. Because the skull is in such great condition, Schroeder can finesse DNA out of the dense bits of bone behind the ear called petrous bone; it preserves DNA better than other parts of the skeleton, even the teeth. “It’s amazing you guys found that,” Schroeder says of the partial skull. “If there’s any DNA, then from what we know, it’ll be there.”

DNA from the remains could add a valuable data point to our genetic history and the movement of haplogroups through time. Who was the person these remains came from? Would he (we think it’s a he) have looked “more Greek-Italian or Near Eastern”?  How will the DNA we find inside them change our understanding of population movements through history? And why, after two thousand years underwater, are there still so many bones?

To the latter question, there exists an answer, even if it’s a little grim. The wreck site is positioned at the foot of Antikythera’s steep cliffs. The ship could have been caught in a storm and dashed against the rocks — just the kind of storm from which those divers originally tried to take shelter. Co-director of the excavation team Brendan Foley explains, “We think it was such a violent wrecking event, people got trapped below decks.” When the ship went down, the wreck was rapidly buried under the sand, and so too were the bodies.

Based on the richness of the debris from the ship, and how it’s distributed, researchers think it was a large merchant ship with multiple decks, possibly toting spoils of war looted from Athens or Asia Minor. It could have been inbound as swag for a victory parade for Julius Caesar. In this era, Greek and Roman merchant ships often carried well-to-do passengers, or at least those who could pay, and sometimes slaves. British underwater archaeologist Mark Dunkley points out that a crew of a dozen or so chained-up slaves in the cargo hold would be SOL in a sinking ship. “The crew would be able to get off relatively fast. Those shackled would have no opportunity to escape.” The bones just uncovered were surrounded by pieces of corroded iron, still unidentified; the iron oxide has stained the bones amber red.

Image: Nature

As for the device, scholars and tinkerers have been poring over the fragments of the Antikythera mechanism for years, analyzing its function and gearing. CAT scanning and repeated radiographs of the fragments have told us about its purpose: the Antikythera device was an orrery, a planetarium that would predict the movements of the sun and the five known planets. It had explicit, detailed instructions on the inside covers: you can just picture a hoary Grecian geometer yelling “RTFM!” The device could also predict eclipses, and — I’m not making this up — it even had its own bloatware: a built-in feature that gave the dates of the Greek Olympic games, which happened every four years.

A device that complex, historians agree, probably wasn’t the work of a lone innovator. It was a masterwork, easily the most technologically advanced device we’ve recovered from antiquity. It could have been the work of Hipparchus, with his mentor and probably his apprentices. Scientists are using the markings on the device to suss out the latitude at which it was meant to be used.


Schematic of the whole Antikythera mechanism, with pins and gears labeled. Via Wikipedia

Naturally, there are some enterprising folks who have taken the data from existing studies of the device, often called the world’s first analog computer, and done reconstructions trying to find the answers. The artifact itself is on display at the National Archaeological Museum in Athens, several people have done elegant working models, and there’s even a project to release CAD files for the device. But its gearing ratios present a problem: the device appears to have a “fast zone” and a “slow zone” where the gear teeth are differently spaced to account for the varying speed of the planets. The varying gear ratio could be the nuanced application of Greek geometrical theory to Babylonian astronomy, and in fact the Greeks were really into geometry at the time of the wreck, and the inscriptions inside the intricately geared device are thoroughly Babylonian. Or it could be sloppy craftsmanship that made some teeth larger than others. Forensic imaging is our best bet now.

Nobody knows who made the Antikythera device, nor how it came to be on the ship that sank. But if we can narrow down a few lineages, some information on who was where and when — if we can figure out who made it and why the gears are spaced the way they are — the DNA results from that partial skeleton stand to throw light on the whole affair. It’s amazing what you can find in the data.

9/21/2016

Surgeon plans first human head transplant in 2017



Modern medical technology has granted doctors the ability to transplant many of the body’s organs, extending the life of people suffering from chronic diseases. But what about replacing all the organs at once along with the body they are in? That’s science fiction right now, but Italian neurosurgeon Sergio Canavero (pictured, top) says he plans to do the first human head transplant next year. This isn’t the first time he’s made this claim, but now he’s got a volunteer lined up and has explained in more detail how he thinks the procedure will go.

If this sounds suspicious, there’s good reason to be skeptical.

It’s easy to see the appeal of a head transplant in theory. If it were possible and reasonably safe, you could cure almost any disease, except for neurological ones. You’d be replacing a person’s entire complement of organs, their immune system, their joints, and everything else that causes problems as we age. Canavero’s first volunteer, Valery Spiridonov, has appeared with Canavero several times to talk about his desire to undergo the operation. Spiridonov is 31 and suffers from a muscle-wasting disease called Werdnig-Hoffmann’s. It leaves him wheelchair-bound and dependent on others for basic needs. Canavero wants to put his head on a body that doesn’t have Werdnig-Hoffmann’s, but finding such a body will be the first hurdle.

According to Canavero, the donor body will come from someone who is brain dead and whose organs would be considered acceptable for transplantation. Things get wild when Canavero explains the process of disposing of the old body. The patient would be cooled in order to slow damage to brain cells, then surgeons would sever the soft tissue in the neck. Tubes would be affixed to all the arteries and veins to maintain blood flow. Then, Canavero plans to use a diamond knife to sever the spinal cord.


Being able to surgically remove the head in an orderly fashion should allow surgeons to then reattach all the nerves and blood vessels to the new body, once that pesky donor head is removed. A special bio-compatible glue will hold the spinal cord together so it can fuse with the donor body. The patient will then be put in a drug-induced coma for four weeks while the connection between the head and body heals. It’s the reattachment process that’s the most unlikely part of all this. There’s never been a successful procedure that reattached a fully severed primate spinal cord.

Canavero says all the technology he needs is available, and estimates the procedure will take about 36 hours and require the services of 150 medical professionals. He expects a 90% chance of success, as in a 90% chance the patient is up and walking around a few months after the surgery. This is… suspiciously high for a completely new procedure.

This all still sounds like science fiction, and medical professionals are mostly skeptical of Canavero’s plan. He seems set to try, though. And who knows? Maybe it’ll work. A few years ago face transplants seemed like science fiction. Even if this does work, the process will be obscenely expensive. Plus, it will give an entire body full of transplantable organs to a single person. It’s unclear if this would be considered ethical when there are so many people waiting for transplants.

Samsung unveils next-generation 960 Pro, 960 Evo M.2 SSDs with blistering speeds, up to 2TB capacity



Samsung has been pushing the boundary of SSDs for several years — it was the first company to release a commercial 3D NAND drive, and it’s been aggressively pushing the new NVMe and PCI Express-based M.2 drive standard. Now, the company has taken another step forward with the 960 Pro. This is the successor to the 950 Pro that launched last year as Samsung’s first M.2 drive for the consumer PCI Express market. The Korean company is also launching the 960 EVO — and like its previous EVO drives, this one is based on TLC NAND.

What’s new, this time around, is the type of 3D NAND (Samsung calls it V-NAND) that the company is using. The 950 Pro relied on Samsung’s 32-layer NAND, while the 960 Pro is based on a denser, 48-layer NAND variant. More vertical NAND stacking translates directly into packing more NAND into the same area — the 950 Pro was limited to 256GB and 512GB drives, while the 960 Pro will be available in capacities up to 2TB. Previously, Samsung’s 32-layer V-NAND topped out at 128Gbit per die, while the new 960 Pro uses 256Gbit chips.
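A bit of rough die math shows why the denser NAND matters for capacity. This is an illustrative sketch that ignores overprovisioning and the decimal-versus-binary marketing of drive capacities:

```python
# Rough die count for a 2TB drive with 256Gbit vs. 128Gbit NAND dies.
die_gbit_new = 256                     # 48-layer V-NAND die density
die_gbit_old = 128                     # older 32-layer die density

die_gb_new = die_gbit_new / 8          # = 32GB per die
die_gb_old = die_gbit_old / 8          # = 16GB per die

target_gb = 2048                       # ~2TB of raw capacity
print(target_gb / die_gb_new)          # 64.0 dies with the new NAND
print(target_gb / die_gb_old)          # 128.0 dies with the old NAND
```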


While Samsung is keeping the same PCI Express 3.0 x4 link defined in the M.2 specification, it has built a new controller, Samsung Polaris, rather than using the UBX controller in the 950 Pro, Anandtech reports. The already-excellent performance of the 950 Pro is expected to be even higher now, with the 960 Pro 2TB offering 3.5GB/s of sequential read and 2.1GB/s of sequential write, compared with 2.5GB/s and 1.5GB/s for the 950 Pro. The theoretical sequential read speed on the 960 Pro hasn’t quite bumped into the practical limit of the PCI Express 3.0 bus, but it’s getting close.
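For context on how close that is: a PCIe 3.0 lane signals at 8GT/s with 128b/130b encoding, so a four-lane M.2 link tops out just under 4GB/s. A quick sketch of the math:

```python
# Approximate usable bandwidth of a PCIe 3.0 x4 link (ignoring protocol overhead
# beyond the 128b/130b line encoding).
lane_gt_per_s = 8.0                # PCIe 3.0 transfer rate per lane
encoding_efficiency = 128 / 130    # 128b/130b line code
lanes = 4

link_gb_per_s = lane_gt_per_s * encoding_efficiency * lanes / 8
print(f"x4 link ceiling: {link_gb_per_s:.2f} GB/s")              # ~3.94 GB/s
print(f"960 Pro reads:   {3.5 / link_gb_per_s:.0%} of ceiling")  # ~89%
```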

Meanwhile, the 960 EVO may use TLC NAND, but that doesn’t mean it’ll be slow. The new drive sports a 13-42GB SLC cache to speed reads and writes (13GB at 250GB, 42GB at 1TB). Sequential read speeds of 3.2GB/s and write speeds of 1.9GB/s are nearly as fast as the full 960 Pro, and while we saw multiple problems with the old 840 EVO drives, those issues appear to have been eradicated with the shift to 3D NAND instead of older 2D planar NAND.

Sustained-performance comparison under thermal load. Image by Anandtech

The new controller reportedly uses a five-core solution (up from three cores), with one core dedicated to host management and four used for NAND flash communication. Samsung has also reportedly improved its thermal management, resulting in a drive that can hold its peak performance for up to 95 seconds, compared with 63 seconds for the original 950 Pro, as shown above.

Expect to see more drives like this coming online in the next few years, as the SSD industry collectively moves to 3D NAND flash. As far as we know, Samsung is still building its NAND on an older 40nm process — as the industry moves towards smaller nodes for 3D NAND we should see further cost improvements, though the difficulty of building deep trenches at smaller process geometries may lead to slow improvement on this front.

The 960 Pro and 960 EVO will be available in October starting at $329.99 and $129.99.

China’s first space station will fall back to Earth in 2017



China launched its first space station in 2011, and managed to successfully use it intermittently for several years longer than originally planned. Now, the Tiangong-1 module is heading for a crash landing on Earth. However, China doesn’t know exactly when it’s coming down. That has fed speculation that all communication with the module has been lost, meaning it could come down almost anywhere.

The best estimate of reentry China has given is sometime in the second half of 2017. That would indicate a slowly decaying orbit. China announced in March that it had lost telemetry and guidance control of the satellite, but it had not played host to astronauts since 2013, so there was no immediate danger. However, some astronomers worried aloud that Tiangong-1 was completely inoperable and could be dropping out of the sky at any moment.

China’s statement (delivered by the government-backed Xinhua news agency) says that the module is intact and orbiting at an altitude of 230 miles (370 kilometers). This at least implies that it knows exactly where the station is, and will be able to predict its landing closer to the event. When it hits the atmosphere, much of Tiangong-1 will break up into tiny fragments. There may still be some segments as large as 100 kilograms, which could cause real damage if they fell on a populated area. However, the odds of it actually getting close to anyone or anything important are slim. Statistically, it’s likely the debris will just impact the ocean. China says it will monitor the reentry for any dangerous objects.

Chinese crew aboard the Tiangong-1.

If China has completely lost contact with the station, at least Tiangong-1 is on the small side. At just 18,753 pounds, it provided crews with 15 cubic meters of living space. The International Space Station has over 900 cubic meters of pressurized space, not including the new experimental Bigelow inflatable module. Unlike the ISS, Tiangong-1 didn’t have the facilities for constant habitation. It was originally used to test docking systems in 2011; then crews were able to return to the module for 11 days in 2012 and 14 days in 2013.

China successfully launched the Tiangong-2 station into orbit last week. It’s considerably larger than Tiangong-1, and will host two astronauts next month. They’ll stay on board for a month to do research. The Tiangong-3 station will follow this one in a few years, and will serve as a platform to test the final technologies China needs to perfect before launching a permanent orbital station in the early 2020s.

Apple A10 teardown sheds light on quad-core SoC, confirms Intel won modem contract





Every time a new Apple device ships, it’s interesting to see how manufacturing technology has advanced. This year is no exception: the iPhone 7 teardown revealed a number of interesting details and confirmed a rumor we’d heard before — Intel did indeed win at least some of the iPhone 7’s modem business (Intel builds the modem inside the AT&T and T-Mobile devices, denoted as the A1778 and A1784). The Verizon and Sprint products (A1660 and A1661) use a Qualcomm modem.

Chipworks, which performed the teardown and analysis, doesn’t dive into the implications of a dual-sourced modem between Intel and Qualcomm, but this may have practical repercussions depending on how you intend to use the device. Because Intel’s modems don’t support CDMA, you won’t be able to take AT&T or T-Mobile devices to non-GSM networks. The Qualcomm modems, in contrast, support both GSM and CDMA, meaning they should be compatible on any carrier network across the country.

Still, this is a huge feather in Intel’s cap. The company hasn’t had great luck pushing its XMM modems into high-profile device wins — at least, none it has prominently discussed. Apple’s sheer volume should drive materially higher profits for Chipzilla’s networking division.

Annotated Apple A10 die diagram. Image: Chipworks

Unlike the iPhone 6s, which featured dual-sourcing between Samsung and TSMC, the iPhone 7 may be a TSMC-only design. Die size on the new chip is 125mm sq, a 20% size increase over the A9’s 104.5mm sq (at TSMC, the Samsung variant was smaller). According to Chipworks, the A10 is considerably more dense than the A9 thanks to better packing on Apple’s part — a straight scale-out of the A9 would’ve left Apple with a chip nearly 150mm sq, as compared to a relatively svelte 125mm sq.

The chip is built on TSMC’s 16FFC process. The “C” stands for compact, and the new node is intended for use in mainstream and low-power markets. Compared with 16nm FF+ (second-generation FinFET), FFC reduces SRAM area and leakage, and supports ultra-low-power voltage modes (down to 0.6V). The diagram above shows Chipworks’ estimate of where specific features are located, after Anandtech helped narrow down potential feature locations for the “little” CPU cores.

The battery is a 1960mAh unit, compared with the 1810mAh pack used in the iPhone 6s. The batteries that failed on Samsung’s Galaxy Note 7, in contrast, are nearly twice this size. Overall, the iPhone 7 is a significant manufacturing step forward for Apple, a solid debut for TSMC’s 16FFC process node, and a major win for Intel, which can claim a significant design win for its own modem technology.

7/28/2016

Missing MH370 pilot conducted suicide flight simulations over Indian Ocean

A Malaysia Airlines Boeing 777-200.

Last week, we covered how the search for Malaysia Airlines Flight MH370 is set to end in the near future, as the last of the 46,300 square-mile search area is due to be mapped shortly. With less than 10% of the search area left to examine, the chances of finding the plane within its boundaries are very small. But a crucial piece of information, not previously made public, is that the plane’s captain, Zaharie Ahmad Shah, appears to have conducted a simulated suicide flight scenario that ended with an aircraft disappearing in the Indian Ocean.

The evidence for the simulation run was uncovered in March 2014, when Malaysian officials gave the FBI the hard drives Shah used as part of an “elaborate” flight simulator in his home. The FBI was able to recover six deleted data points that had been stored in Microsoft Flight Simulator X. The FBI document (as reported by New York magazine) reads:
Based on the Forensics Analysis conducted on the 5 HDDs obtained from the Flight Simulator from MH370 Pilot’s house, we found a flight path, that lead to the Southern Indian Ocean, among the numerous other flight paths charted on the Flight Simulator, that could be of interest, as contained in Table 2.
These deleted points show a flight plan that departed Kuala Lumpur, headed northwest over the Malacca Strait, then banked and headed out over the Indian Ocean. Once the fuel tanks were exhausted, the aircraft would’ve plunged into the sea. While rumors that the captain may have been involved have long circulated around the Internet, there are several factors to note here.

First, while Shah’s flight simulator contained records that he’d plotted such a course, the path MH370 is believed to have followed based on analysis of its satellite pings does not match the route Shah planned in simulation. In the image below, Shah’s simulated flight is in red, while the believed path of the aircraft is in yellow.

Shah’s simulated route (red) versus MH370’s believed flight path (yellow).

Second, Shah’s home and work life were both positive to the best knowledge of anyone around him. He had suffered no recent losses, had no history of depression, and no expressed interest in religious or political extremism. The vast majority of terrorists leave some record of their activities, even if such records only emerge posthumously. No evidence linking Shah to any religious or political extremists has ever been found. And a single deleted flight path, absent any kind of documentation, doesn’t prove anything. (I must note that I’d be mortified if I ever ran for city government and someone submitted my activities on SimCity as evidence against me. I have a known weakness for nuclear power plants, giant monsters, and earthquakes.)

Conscious or unconscious?

As Popular Mechanics discusses, one reason the search teams chose to focus where they did is because Shah’s southern suicide route does share general characteristics with the route MH370 is believed to have taken. It also complicates the situation. Virtually all of our assumptions about the aircraft’s final moments rest on the idea that it hit the water very near to the time when it lost power.

If it didn’t — if Shah was conscious and able to glide the craft — it would mean the aircraft could be almost anywhere. The problem is, our analysis of where the aircraft was depended heavily on assumptions about its altitude, trajectory, and autopilot status. A conscious pilot also raises a host of other questions. If Shah hijacked the plane, what happened to the rest of the crew, who would have had access to an emergency radio and other means of signaling the ground, as well as full oxygen masks and oxygen bottles? While I don’t present this Quora post as providing guaranteed data, the general consensus seems to be that while a pilot could potentially take action to vent cabin pressure and compromise the aircraft, doing so while simultaneously incapacitating the crew and guaranteeing he himself remained conscious would have been difficult, if not impossible. If Shah took actions that resulted in his own incapacitation or death, that means the plane wouldn’t have had a pilot to land it six hours later, which means the search area should’ve still been accurate.

Knowing that Shah plotted a suicide course on a flight simulator is interesting evidence that suggests he might have taken his own life. But the complete lack of corroborating evidence means it could also be little more than coincidence. Without the plane — and possibly even with it — we’ll never know.

The Human Connectome Project zeroes in on the firmware of the brain


Is the mind an emergent property of the brain, or is it something… else? Cartesian dualism is a polarizing topic, but like many other ideas that we thought were the province of philosophers, we’re getting light shed on it from a surprising source. Researchers from the Human Connectome Project (HCP) have just released our best-ever functional map of the human brain. It’s twice as finely detailed as anything that has come before it — and it’s tiptoeing closer to settling the mind-body problem. As it turns out, the brain-as-computer analogy has its shortfalls.

“The brain is not like a computer that can support any operating system and run any software,” says neuroscientist David Van Essen, Principal Investigator of the Human Connectome Project. “Instead, the software — how the brain works — is intimately correlated with the brain’s structure — its hardware, so to speak. If you want to find out what the brain can do, you have to understand how it is organized and wired.”

Image: Glasser, Van Essen et al
Because different subsurface brain structures look more or less the same from the outside, neuroscientists have heretofore relied mostly on gross anatomy and unfortunate happenstance to tell us what parts of the brain did what. (Here’s looking at you, Phineas Gage and Patient H.M.) Structure and function are tightly coupled in the brain, down to the molecular level. But the brain does so many things. Getting the borders between brain structures wrong can badly compromise our ability to understand how the brain works.

The HCP has been working for nigh unto six years to shed light on this problem. Their most recent announcement is impressive: The project just doubled the spatial resolution of our best known functional map. In doing so, they also integrate several different ways of explaining differences between brain regions. The researchers report that they’ve found a total of 180 distinct areas per hemisphere, regions which are bounded by sharp changes in cortical architecture, function, connectivity, and/or topography. This development stands to change neuroscience, by opening up our understanding of the relationship between structure and function.

Using multimodal MRI data from the HCP and a semi-automated approach, the researchers checked the new “parcellation” of cortical function by comparing its predictions with brain scans from hundreds of healthy volunteers, so that the model could divide functional regions with exquisite accuracy. The parcellation divides both the left and right cerebral hemispheres into 180 areas based on physical differences like cortical thickness, functional distinctions like which areas respond to language stimuli, and differences in the connections between functional regions. If you think of it using the metaphor of, say, Google Maps, this approach combines political maps with satellite imagery; the most important divisions are invisible from a zoomed-out perspective, but important all the same.

Like cartographers from the Age of Exploration, brain cartographers are creating a tool for others to use in exploration and discovery. Prior work on the connectome gave us a directional diagram of information flow through the brain, and a startling semantic atlas that shows where we process the meanings of certain words and abstract topics. This team of researchers hopes that their work will prove an asset to other researchers as they push back the frontier of ignorance.

“We were able to persuade Nature to put online almost 200 extra pages of detailed information on each of the 180 regions as well as all of the algorithms we used to align the brains and create the map,” Van Essen said. “We think it will serve the scientific community best if they can dive down and get these maps onto their computer screens and explore as they see fit.”

7/26/2016

TECNO Camon C9 Eye Scanner: Overrated or Not?

Upon the release of the Camon C9, a lot of people still thought the Camon C8 was the best camera phone; nonetheless, they felt the next phone in the Camon series would be better than its predecessor.

What’s particularly interesting about the device is the eye scanner feature. Ideally, though, you still need a backup passcode if you want to unlock your phone during a blackout.

Most people use their phones in bed when there is a blackout or when the lights are turned off; the passcode will come in handy here.

The Pros of the eye scanner
• You are allowed to register five different identities on the eye scanner
• If you choose not to use the eye scanner, there is the option of using a passcode instead.
• The scanner makes a lovely scientific sound that can make anyone feel ‘geeky’.
Con(s)
• It can be sensitive to lighting. Sometimes you have to re-register your eye print in the dark, especially if it was originally registered in good light.
Because the eye scanner is sensitive to lighting, you’ll still need the backup passcode to guarantee access to your phone.
At a price of N49,900, the TECNO Camon C9 is good value for money. TECNO always gets its pricing right in order to steal the show.

Multiple Galaxy Note 7 snapshots leak ahead of Samsung announcement





The Samsung Galaxy Note 5 was a fine device, though fans of the Note series were irked by the smallish battery and lack of expandable memory. Samsung didn’t even give the Note 5 a full release in Europe, leading many to question its commitment to the stylus-packing phablet form factor. Rumors are swirling as we approach the August 2nd unveiling of the Note 7 — yes, Samsung is skipping a number to bring the Note series naming in line with the Galaxy S phones. Skipping a number seems somewhat appropriate here. The Note 7 is getting leaky ahead of the announcement, and it looks like a big change for Samsung’s flagship phablet.

The latest and best look we’ve gotten of the Note 7 comes from the Chinese social media service Weibo, by way of French tech blog NowhereElse. This international connection reveals a device that looks like a slightly blown up Galaxy S7 Edge. From the front, the phone has almost no bezel on the left and right of the screen. That’s because the Note 7 will use one of Samsung’s curved Super AMOLED displays. This will be the first time the Note series has gone with such a panel, but Samsung has seen huge interest in the Edge variants of other phones. They vastly outsell the standard phone in many markets, even with the higher price tags.

Leaks point to a 5.7-inch 1440p display on the Note 7, but with the curved edges the usable space is slightly smaller. The curve in the leaked images looks like it might be less severe than the GS7 Edge’s, though. Samsung has always attempted to graft some features into the software to take advantage of curved displays, but they’ve never been very useful. The curved area will probably prove even less useful when using the Note 7’s built-in stylus (which hopefully doesn’t get stuck this year). Beneath the screen in the leaked photos is Samsung’s usual physical home button (with fingerprint reader), flanked by capacitive back and overview buttons.


Aside from the stylus, the Note 7 is expected to include a Snapdragon 820 (or possibly 821), 4-6GB of RAM, and the same 12MP main camera as the GS7. The battery capacity is reportedly 3,600mAh, substantially higher than the 3,000mAh found in the Note 5. Rumors have long pointed to an iris scanner coming to Samsung’s phones, and again some are expecting the Note 7 to include this feature. The idea is that you would be able to look at the phone and it would unlock after identifying you via the unique pattern of your iris. That sounds nice and all, but it might not work very well in practice.

The Note 7 is expected to launch not long after the announcement on August 2nd with software based on Android 6.0. Android 7.0 Nougat is just around the corner, but that will have to wait until later for the Note 7. It will at least come with a slightly revamped version of Samsung’s TouchWiz UI layer.

Five Easy Ways To Save Your [Mobile & Computer] Data

Things you should do right now to conserve your data.

Worry less about exceeding your data limit by following these five simple tips for Android users:
1. Reduce the data used by your Android mobile device or computer by turning on Chrome Data Saver Mode. Between compressing web pages and removing images when loading a page on a slow connection, you can cut your data usage by up to 70%.

2. Save YouTube videos for offline viewing with YouTube Offline, and watch them as often as you like without using data or buffering each time.

3. Whether it’s your neighborhood or a weekend getaway destination, there’s a way to use Google Maps without using any data. Download an area of the world and seamlessly use Maps features like turn-by-turn navigation and access useful location information without a network connection using Google Maps Offline.

4. Identify and remove data intensive apps by going to Settings > Data Usage on your Android device. You may be surprised to see data being used by apps you hardly touch!

5. Disable auto-updating apps on your Android device by opening Google Play and tapping the hamburger icon (three horizontal lines) on the top left of the screen. Go to Settings > tap Auto-update apps > select Do not auto-update apps or Auto-update apps over Wi-Fi.

Crypto-heist threatens to tank blockchain-based future



The DAO stands for the “Decentralized Autonomous Organization,” and while that could very well refer to anything from a blockchain car-share app to a hive of honey bees, this rather boring title stands for something truly remarkable: the first unmanned investment portfolio. It is a proof of concept for what many believe will be the future of finance, with software organizing and overseeing an investment strategy developed through semi-democratic input from the collected investors. It’s secured by the much-ballyhooed Ethereum platform, using a cryptocurrency called Ether as its trading currency, and at first everything seemed to be proceeding according to plan. It was a confirmation of the promise of the blockchain, and proof that the future really is near at hand!

Then, just days after the DAO’s public launch, a lone hacker managed to digitally make off with more than $50 million worth of Ether, or roughly a third of the overall capital the DAO had raised. More than a setback, this was an existential problem: This was the one, specific thing that was supposed to be impossible under the supervision of the blockchain. Despite all the efforts detailed below, make no mistake: the DAO is dead. What’s important now is containing the damage, and stopping it from tanking Ethereum as a whole.
The blockchain-based smart city might not be far away.

Reports indicate that this hack took the form of a recursion glitch, which allowed infinite repetition of the otherwise legitimate command to ‘split’ your Ether out of a shared wallet that’s gearing up for investments you don’t support, and into a different DAO account — or potentially a wallet of your own. The recursive call drew this Ether from the donating fund over and over, without updating the balance after each withdrawal. This allowed the target accounts to be completely drained, in maximum increments of the thief’s own investment in the DAO. At the time of the hack, the attacker got control of about 3.6 million Ether, out of about eight million overall.
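
To see how that kind of recursion drains a fund, here is a minimal Python simulation, assuming a naive fund object that sends money out before updating the caller’s recorded balance. The names and figures are invented for the sketch; the actual DAO contract was written in Solidity, not Python.

```python
# Minimal simulation of the flaw described above: "split" hands out funds before
# zeroing the caller's recorded balance, so a malicious recipient can re-enter it
# and drain the pool in increments of its own stake. Illustrative sketch only.

class NaiveFund:
    def __init__(self, pool):
        self.pool = pool        # Ether actually held by the fund
        self.balances = {}      # each investor's recorded stake

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.pool += amount

    def split(self, who, receive):
        stake = self.balances.get(who, 0)
        if stake > 0 and self.pool >= stake:
            self.pool -= stake
            receive(stake)              # send the Ether first...
            self.balances[who] = 0      # ...and zero the balance only afterwards (the bug)

fund = NaiveFund(pool=80_000)           # toy figures; the real DAO held ~8 million Ether
fund.deposit("attacker", 3_600)         # the attacker's own legitimate stake

stolen = 0
def malicious_receive(amount):
    """Re-enter split() before the balance has been reset, repeating the withdrawal."""
    global stolen
    stolen += amount
    if fund.pool >= fund.balances["attacker"]:
        fund.split("attacker", malicious_receive)

fund.split("attacker", malicious_receive)
print(f"attacker drained {stolen}; {fund.pool} left in the pool")
```

The standard defense is simply to zero the recorded balance before sending anything out, so a re-entrant call finds nothing left to withdraw.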

As you might imagine, the response from both current and potential DAO investors has been strong — so strong that many are wondering whether the DAO or even Ethereum itself might have been struck a fatal blow. Ethereum is an almost absurdly ambitious idea, and at this early stage it’s buoyed almost entirely by public and investor interest; if public opinion begins to sour, and its early wins don’t lead to better, more ambitious projects to follow, Ethereum is still very capable of folding under its own weight.
Now, the worst part: it’s possible to undo this transaction, but a large proportion of the DAO and blockchain community think the cure could end up being worse than the disease. If all the participants in the DAO agree, they can collectively implement a “hard fork” in the software, in principle forcing a new reality into existence. This isn’t quite the same as hitting rewind, since the stolen funds don’t end up back in victims’ wallets directly, but are all deposited into a publicly accessible fund where investors can withdraw the amount they lost.

None of this saves the DAO. By de-legitimizing such a huge transfer of funds, the organization knowingly cut its own throat. Other DAOs and DAO-like entities will spring up, but they will be second shots at a previously tried and failed mission: to prove that the blockchain is both useful and safe for our most sensitive jobs and information.

We should take a moment to go over just what actually had to occur to make this “fix” possible: the users hosting the blockchain software all had to download and run the new, “forked” version of the blockchain. By the time the change was ready to go, polling of investors had made their preference for reimbursement clear, but there was still a chance of disaster. If some non-trivial portion of the hosts decided to keep running the old version of the blockchain, the result would be two parallel versions of the ledger. The thief’s ability to trade or redeem his or her ill-gotten Ether for cash would then depend on which of these versions of the blockchain the other user or currency exchange happened to be using — which obviously introduces all kinds of cascading problems as the recipients of those sometimes-respected coins go out and try to do more business.

Graph of Ether remaining in the withdrawal account. Source: Ether.camp
One interesting facet of this reimbursement: a lot of people have yet to collect their funds, totaling millions of dollars of unclaimed money. Some of this could be investment by the DAO team itself, and almost certainly some of it belongs to clueless investors who remain blissfully unaware of all this, but some users have also expressed frustration with the length of the withdrawal process. Since the blockchain-based wallet containing these funds can and will outlive (has outlived) the DAO itself, an investor could withdraw their funds five or 10 years from now, and it ought to make little difference. Assuming that crypto-investment is here to stay, it will take economists a while to fully wrap their heads around just how the peculiarities of digital currency affect how it flows through society.

FBI, cybersecurity experts investigating potential Russian ties to DNC email leak

Kremlin
On Friday, WikiLeaks posted a trove of 20,000 emails procured in a hack of the Democratic National Committee. Today, the FBI announced that it was investigating the hack and the circumstances surrounding it.

The hack was first reported last month by the Washington Post, which said hackers affiliated with the Russian government had breached DNC servers and gathered opposition research on Trump as well as “other material.” That report claimed the hackers so thoroughly penetrated the DNC’s servers that they gained complete access to email and chat traffic. At the time, the Post reported that the hackers might have had access to the DNC’s servers for up to a year.

The Post claimed that no personal information on any donors had been seized in the attack, but the WikiLeaks email trove proved that false. Included in the leaked emails were full names, addresses, phone numbers, passport and social security numbers, credit card payment details, and full card numbers.
Security experts from CrowdStrike claim to have uncovered forensic evidence suggesting that two competing teams of Russian hackers penetrated the DNC’s servers, and that the information leaked to WikiLeaks may have come directly from Russian intelligence. If true, this would raise significant questions about Russia’s attempted interference in US elections. The Trump campaign has been accused of adopting positions that favor Vladimir Putin, though Trump has denied all such allegations.

Thomas Rid, a professor at King’s College London, told the Post that he communicated with the hacker, Guccifer 2.0, who leaked the DNC’s emails to WikiLeaks. “I quizzed him two times in a Twitter direct message back and forth and he very clearly indicated he gave the emails to WikiLeaks,” Rid told the Post. These early findings have been backed up by multiple additional firms: Mandiant (part of FireEye) and Fidelis have confirmed CrowdStrike’s initial analysis. The hackers that penetrated the DNC used tools, practices, and tradecraft previously linked to Russian groups, including hardcoded IP addresses for command-and-control servers that have been conclusively linked to Russian military intelligence, the GRU.

The leaked documents have also been modified by users with Russian-language default settings, Motherboard reports, and there were hyperlink errors in Cyrillic in the document metadata. While Guccifer 2.0 has denied being Russian or affiliated with Russia and claimed to be Romanian, he was unable to respond colloquially and without errors when asked to explain his hacks in Romanian. Russian involvement has not been conclusively proven, but there are multiple arrows pointing in the same direction.

GRU-1
Not this Gru.

The other major issue related to the leak is the alleged favoritism shown to Hillary Clinton during the Democratic primary season. Supporters of Bernie Sanders often alleged that Debbie Wasserman Schultz, who headed the Democratic National Committee, scheduled debates at times that favored Clinton and generally worked behind the scenes to disadvantage Sanders. As of this writing, none of the emails shows evidence of the voter suppression or deliberate disenfranchisement that some Sanders supporters allege took place, but there has been at least one email chain that discussed attempting to paint Sanders as an atheist. Wasserman Schultz has agreed to step down as a result of the leaks and will no longer speak at the Democratic National Convention.

Just to make this perfectly clear: While some have talked about this leak as a second Clinton email scandal, Hillary Clinton herself is not a direct party to this leak, nor responsible for it. The investigation into Clinton’s own email practices and the FBI’s decision not to indict her for those practices was an entirely separate affair.