Richard Branson (center) stands in front of the Hyperloop at Hyperloop One's test site in Nevada. Virgin Group
Hyperloop One just struck a major deal with Richard Branson's Virgin Group.
Virgin Group announced Thursday that it has invested in Hyperloop One, a startup that's working on constructing the high-speed transit system Elon Musk first outlined in a white paper in 2013. The terms of the deal weren't disclosed, but the investment was significant enough that Hyperloop One will now be called Virgin Hyperloop.
"After visiting Hyperloop One’s test site in Nevada and meeting its leadership team this past summer, I am convinced this groundbreaking technology will change transportation as we know it and dramatically cut journey times," Virgin Group founder Richard Branson said in a press release.
Hyperloop One is headed by Shervin Pishevar, a Silicon Valley venture capitalist best known as an early investor in Uber. The Los Angeles-based startup is conducting feasibility studies in Dubai and Finland for the transit system. US locations including Denver and Nevada have also submitted proposals for the construction of a Hyperloop.
The Hyperloop is a nascent transportation system that works by shooting pods through a vacuum-sealed tube at speeds that experts say could reach 700 mph.
In August, the Hyperloop traveled almost the entire length of the track and reached a top speed of 192 mph. A month earlier, it had managed 70 mph over 300 feet. The company still needs to demonstrate that the system can transport people on tracks long enough to connect cities.
Google cofounder Sergey Brin with a Loon balloon. Yudhi Mahatma/Antara Foto/Reuters
Alphabet has quietly upgraded its internet balloon initiative from a research lab "project" to an official corporation, setting the stage for what could be the latest standalone business to spin out from Google's parent company.
Project Loon, which develops solar-powered balloons that beam internet access down to earth, has been incorporated as Loon Inc., according to regulatory filings.
Business Insider first noticed Loon was listed as "Loon Inc." in a recent filing to the FCC seeking permission to float Loon balloons above Puerto Rico and provide internet access to areas affected by Hurricane Maria. Previously, Loon was officially referred to as a project under Alphabet, or under X, the Alphabet subsidiary dedicated to creating ambitious "moonshot" technologies.
Loon's incorporation is a sign that Alphabet may be preparing to spin Loon out of the X division and let it operate as its own company. Alphabet went through a similar process last year with Waymo, the company formed out of X's self-driving car project. X also spun out Dandelion, a geothermal energy company, earlier this year, but Dandelion is not under the Alphabet umbrella.
Getting spun out from the mothership often indicates that Alphabet believes an experimental technology or product has matured enough to be ready for commercialization. That gives the spinout company the freedom to pursue its own business objectives, while at the same time subjecting it to the financial pressures of an independent business.
Google has previously said that it believes Loon's "floating cell towers in the sky" could one day become a business that generates billions of dollars in revenue. So far, however, Loon has seen only limited deployments, in areas such as Sri Lanka, Peru and, more recently, Puerto Rico.
As a standalone company, Loon would join a growing roster of Alphabet subsidiaries, known as "Other Bets," such as high-speed internet service Access, smart appliance maker Nest and Waymo, the self-driving car company. In the second quarter of the year, Alphabet's Other Bets posted an operating loss of $772 million, on revenue of $248 million.
Spencer Hosie, a lawyer representing Loon competitor SpaceData in a lawsuit, told Business Insider that he noticed the change in Loon's status in the recent FCC filing. Hosie said he plans to add Loon Inc. as a defendant in SpaceData's case against Alphabet. It's unclear when exactly Alphabet incorporated Loon, but Hosie said he suspected that it could have been as recently as last week.
New Nokia Android flagship from HMD Global should be out in the next six weeks
HMD Global's upcoming Nokia 9 flagship Android smartphone will have a near-bezel-free display but, like many flagship phones released these days, won't have a standard headphone jack.
With the device set to launch before the end of the year - realistically, within the next six weeks - 3D renders have been leaked and published by @OnLeaks for CompareRaja.
They show that, in addition to the bezel-less display and lack of headphone jack, the Nokia 9 will sport a curved screen and a curved rear, similar to last year's Galaxy S7 Edge.
It'll be shorter than the S7 Edge, though, with measurements of 150.9x72.6x7.7mm, which suggests it'll sport smaller bezels above and below its rumoured 5.5in QHD display.
The screen doesn't look quite as bezel-free as that on the Galaxy S8, but it is bezel-free enough that the fingerprint scanner has been moved to the rear of the device. This sits alongside a vertically-aligned dual-lens camera, which CompareRaja notes will likely weigh in at "12 or 13MP".
The renders also show that, like the Google Pixel 2, HMD Global will ditch the 3.5mm headphone jack in favour of audio over USB-C.
If the Nokia 8 is anything to go by, the Nokia 9 will arrive running Android 7.1.1, and HMD's largely-unobtrusive user interface likely means that it'll be promptly updated to Android 8.0.
Elsewhere, specs are said to include the Snapdragon 835 processor, 4GB of RAM and 64/128GB onboard storage options, and the Nokia 9 is also expected to be HMD's first smartphone to feature a built-in iris scanner. CompareRaja notes that OZO audio and an array of mics for active noise cancellation will also likely be included.
There's no word yet as to when the Nokia 9 will become official, but earlier rumours point to a launch in the fourth quarter, which realistically means before the end of November, at the latest.
People should have more confidence in their innate 'wisdom', says Ma, at Alibaba cloud computing event
Alibaba founder Jack Ma
Billionaire Alibaba entrepreneur Jack Ma has said that artificial intelligence won't make human beings redundant in a keynote speech at Alibaba Cloud's Computing Conference in Hangzhou, China.
Ma's attitude to AI is contrary to some of the more apocalyptic warnings from Western technology entrepreneurs such as Bill Gates and Elon Musk, not to mention physicist Stephen Hawking.
Ma argued that human beings ought to have more confidence in their abilities, particularly the ‘wisdom' they possess that AI will never have.
"People are getting more worried about the future, about technology replacing humans, eliminating jobs and widening the gap between the rich and the poor," said Ma. "But I think these are empty worries. Technology exists for people. We worry about technology because we lack confidence in ourselves, and imagination for the future."
That wisdom, he added, is reflected not by the losses of the world's best Go players to the AI-powered AlphaGo computer, but in the creation of the game in the first place. "AlphaGo should compete against AlphaGo 2.0, not us. There's no need to be upset that we lost. It shows that we're smart, because we created it," he said.
However, while humanity isn't about to be handed a collective P45 by an intelligent robot, it could start to enjoy much shorter working weeks as more intelligent tools are adopted, conjectured Ma.
Some time within the next 30 years, he suggested, people will have both shorter working weeks and shorter working days - but still feel busier than ever.
"My grandfather worked 16 hours a day on a farm and felt that he was very busy. He had only one day off a week. I have two days off a week, I work for eight hours a day, and I feel even busier than my grandfather," he said.
Ultimately, though, Ma said that no-one really knows what the future will hold. "Anything that can be clearly defined is not the future. When faced with the future, we're all kids; no one's an expert," he said.
Data protection is an essential component in any data management strategy, and one that all system and storage administrators should fully embrace.
We take backups for various reasons: hardware can fail, software has bugs, and users make mistakes and delete or change data unintentionally.
There is also the risk of deliberate and malicious attempts to destroy or encrypt data for financial gain or to “get back” at a previous employer.
People say you only find out how good your insurance cover is when you make a claim. With backups, we don’t want to wait until we need to restore data to find out whether our backups are any good.
Data recovery can be a stressful scenario that doesn’t need the additional pressure of worrying whether backups are valid or not.
The solution, of course, is to test that backups have worked by restoring data.
Historically, this was a difficult and time-consuming task that was limited in terms of what was possible.
When there was a physical server for each application, restoring data meant having additional hardware on which to perform the restore process. It was not possible or practical to recover to the production environment in anything other than a limited way.
So a full restore of an entire platform was rarely done. Another reason was the potential for the restored system to conflict with the production one – of which more later.
But with the widespread adoption of virtualisation, things have become much easier.
A virtual machine (VM) is just a set of files that contain the operating system and data of the VM, plus details on VM configuration (processor count, memory, network, and so on). This means that a VM can easily be recovered from backups and powered up to validate that the application can be recovered and made accessible.
It is worth remembering that testing the restore of an application serves two purposes. First, it validates that the restore does actually work. Second, it provides a benchmark to ensure that the recovery process can be completed within agreed service levels – mainly recovery time objectives (RTOs).
The results of regular testing can be reported back to the business to show that application recovery targets can be met, or to prompt a review of those targets if the process cannot be completed in time.
Backups: What to test?
At this point, we should think about exactly what we want to recover as part of a test. There are multiple levels to consider:
File recovery – Can I recover individual files from the backup? This process is easy to apply to physical and virtual servers, as well as backups of file servers. The choice of data to recover really depends on what data is being stored. It could make sense to recover the same file each time, or to recover new data each time. Automation can have a benefit here, which we will cover later.
VM recovery – Can I bring back a virtual machine and power it up? This is clearly one for virtual environments, rather than physical ones. Recovering a virtual machine image is relatively easy, but consideration has to be given to where the VM will be powered up. Starting the VM on the same production environment brings up immediate issues of network IP conflicts, and SID conflicts for Windows systems. There may also be issues with whatever application services the VM offers. The choice here is to power up the VM in an isolated environment (which can be done using a “DMZ” subnet on the hypervisor) and provide access only through that DMZ network. Be aware that powering up recovered VMs with new IDs may have an impact on application licensing. Check with your software provider on what the terms and conditions allow.
Physical recovery – Physical server recovery is more complex and depends on the configuration of the platform. Some servers may boot from SAN, whereas others may have local boot disks. The recovery process then depends on the configuration. Recovering an application to alternative hardware removes a lot of risk, but it does not fully represent the recovery process. Recovering an application to the running hardware means an outage and so the test is likely to have more risk and be carried out less frequently.
Data recovery – Depending on the backup process, data recovery can be an option in testing. For example, if data in a database is backed up at the application level (rather than the entire VM), then data can be restored to a test recovery server and accessed in an isolated environment.
Application recovery – Full application testing can be more complex because it relies on understanding the relationships between individual VMs and physical servers. Again, recovering a suite of servers as part of full application testing is best done in an isolated environment with separate networking.
It is clear that more extensive testing has impact and risk, but can provide more reassuring results. Choosing a recovery test scenario depends on the backup and restore methodology in use. If the recovery process is to restore an entire VM, then that is what the test needs to do. If the recovery process means rebuilding a VM and recovering the data, then that is what the test process should reflect.
Backups: How often to test
How often should testing be performed? In an ideal world, a test should be scheduled after every backup to validate that the data has been successfully secured. This is not always practical, so there is a trade-off to be made between the impact and effort of recovery and having a degree of confidence in the restore.
As a minimum, there are four options:
As part of a regular cycle (for example, monthly). Schedule a restore test for each application on a regular interval.
When an application changes significantly (patches, upgrades, for instance). Schedule a (more comprehensive) restore test when significant changes have been made to an application, such as upgrading to a new software release or when installing a major patch package or operating system change.
When application data changes significantly. If an application has a regular import of data from an external source, for example, performing a test restore can help validate timings for data recovery.
When a new application is built. This means testing the restore of a new VM or server when first created. This may seem excessive, but it makes sense to ensure that the server/VM has been added to the backup schedule.
The ability to test recovery can be significantly improved by the use of automation. At the most basic level, this can mean scripting the restore of individual files. But more complex testing can be done with the use of software tools, many of which are integrated into backup software products.
Veeam and Zerto are two companies that provide the ability to automate the testing of restores without affecting the production environment.
Suppliers such as Rubrik and Cohesity offer dedicated hardware platforms to manage backup data and can be used as a temporary datastore for recovered VMs. This allows recovery to be scripted and automated relatively easily.
These solutions are mostly focused around VM recovery, so more complex scenarios (such as recovering a Microsoft Exchange platform) may need additional manual steps (especially to confirm the application is actually working). This means setting some definitions around what successful recovery looks like – either the ability to get back individual files or, at the most detailed level, the ability to access the application being recovered.
As we move into a hybrid cloud world and increasing use of containers, backup testing offers challenges and opportunities. Having public cloud as a backup target allows applications to be recovered and tested in the cloud, reducing on-premise costs. Containers represent a new application deployment paradigm, so will have challenges around backup and restore. As we move forward, the fundamentals remain the same – check your backups regularly and ensure recovery processes are well documented.
Key to GDPR compliance – in relation to data retention and storage – are the handling of personally identifiable data and the right to be forgotten.
The right to be forgotten allows individuals to request that data be deleted without “undue delay”.
All this places onerous requirements on how organisations retain data, as well as their ability to find and deal with it.
In this podcast, ComputerWeekly.com storage editor Antony Adshead talks with CEO of Vigitrust, Mathieu Gorge, about the implications for storage of GDPR’s requirements on personally identifiable data and the right to be forgotten.
Antony Adshead: How do we ensure we can locate personal data?
Mathieu Gorge: First of all, you need to define what personally identifiable data is under GDPR. Essentially, it is any data that could put a data subject in Europe at risk, whether you store, process or work on that data in the EU or not.
The key challenge that we’re seeing in the market right now is that most organisations do not know where the data is or what type of data they have.
For example, do they have data that is covered by GDPR, do they have other data that is not covered by GDPR, do they take credit card holder data, do they take protected health information data, and where is that data located?
Where within their ecosystem can they find it? Is it on their own network or at their subsidiaries? Do they exchange data with partners, suppliers, cloud applications and so on?
So, to do that, what they need to put in place is a data discovery exercise that will allow them to map out where data covered by GDPR is located, where it is coming from, where it is going to, and what kind of processing it is undergoing.
Then they can classify the data, using tools to do so, and move on to the next level, which is how to manage access to that data in such a way that I can guarantee, under GDPR, that I have taken what is known as “appropriate security measures” to protect the data, and ensure that I know at any given time that the data is fairly and appropriately managed and protected.
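The data discovery exercise Gorge describes can be sketched in outline. Real discovery and classification tools are far more sophisticated (Luhn checks, context awareness, national ID formats and so on); the naive scan below, with made-up patterns and a placeholder directory, only illustrates the idea of mapping where regulated data might live.

```python
import re
from pathlib import Path

# Naive patterns for illustration only - real discovery tools use much
# more robust detection than simple regular expressions.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}


def discover(root: str) -> dict:
    """Map each text file under root to the kinds of personal data it appears to hold."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings
```

The output of such a scan is the starting point for classification: once you know which stores hold personal data, you can apply access controls and retention rules to them.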
Adshead: How can we enable the right to be forgotten in storage systems?
Gorge: The idea is that under the eight principles of data protection you need to obtain data and process it fairly; you only need to keep it for one or more specified explicit and legal purposes; you can only disclose it in ways that are compatible with these purposes; it needs to be kept safe and secure, accurate, complete and up to date; and you need to ensure it is adequate and relevant.
What’s really important in those principles is the fact that you can only retain it for the amount of time that is necessary for the purpose, and you need to give a copy of the personal data to the individual on request and ensure that – if they tell you they no longer want you or allow you to have that data – it can be erased.
And so, the right to be forgotten is really about putting in place the right processes, the right technology and the right training in your organisation to make sure that [you can fulfil a request] if someone says to you, ‘I no longer want you to have the data’ or ‘The data that you have about me is no longer accurate, I want you to take corrective action’.
That corrective action could be, ‘Please erase the data’, or it could be, ‘Please update the data to the appropriate level of data’.
And so, I go back to the previous question, which is that you need to be able to locate your data, you need to have data classification in such a way that if someone rings you and says, ‘I want you to delete that data because it is no longer accurate’, or, ‘You are using the data for a purpose that is no longer the purpose I gave you consent for’, then you need to be able to take action fairly quickly.
I think we will see that the regulators in the EU will look at the right to be forgotten as one of the main topics when they start to enforce GDPR.
Adshead: When will GDPR actually come into force?
Gorge: May 2018, although some European member states have already brought that forward and put GDPR into their own regulation ahead of May 2018.
So, again the advice is if you are not in compliance, you should at least be able to demonstrate that you have a roadmap to compliance by May 2018.
Surveys suggest children find it hard to avoid bullying and abuse on social media platforms
Facebook and Twitter could be asked to pay for action against the "undeniable suffering" social media can cause, the culture secretary has said.
Cyber-bullying, trolling, abuse and under-age access to porn will be targeted in plans drawn up by Karen Bradley to make the online world safer.
Ms Bradley wants social media groups to sign up to a voluntary code of practice and help fund campaigns against abuse.
She also wants social media platforms to reveal the scale of online hate.
According to the government, almost a fifth of 12- to 15-year-olds have seen something they found worrying or nasty on social media, and almost half of adults have seen something that has upset or offended them.
Tech groups 'willing'
Despite promising to introduce new laws regulating the internet in the Conservative Party's manifesto, Ms Bradley told the BBC that legislating would take "far too long".
Ms Bradley said that the plan was for a "collaborative approach" with internet groups, adding that she sees a "willingness from them".
She added: "Many of them say: 'When we founded these businesses we were in our 20s, we didn't have children… now we're older and we have teenagers ourselves we want to solve this'."
Ms Bradley said the internet had been an "amazing force for good, but it has caused undeniable suffering and can be an especially harmful place for children and vulnerable people".
"For too long there's been behaviour online that would be unacceptable if it was face-to-face."
One of the proposals is for an annual transparency report which could be used to show:
the volume of content reported to companies and the proportion taken down
how users' complaints are handled
categories of complaints, including from under-18s, women, the LGBT community or on religious grounds
information about how each site moderates content
Ms Bradley said that the government "could legislate in the future", adding that any changes to existing law would be underpinned by the following principles:
What is unacceptable offline, should be unacceptable online
All users should be empowered to manage online risks and stay safe
Technology companies have a responsibility to their users
The government also wants to see a new body, similar to the UK Council for Child Internet Safety, to consider all aspects of internet safety.
In response to the consultation, Facebook said: "Our priority is to make Facebook a safe place for people of all ages which is why we spent a long time working with safety experts like the UK Safer Internet Centre, developing powerful tools to help people have a positive experience."
"We welcome close collaboration between industry, experts and government to address this important issue."
'Unique set of risks'
A spokesperson for the NSPCC said keeping young people safe online was "the biggest child protection issue of our time".
"Social media companies are marking their own homework when it comes to keeping children safe, so a code of practice is definitely a step in the right direction but 'how' it is implemented will be crucial.
"Young people face a unique set of risks when using the internet and it is important any strategy recognises the challenges they face online and requires industry to act to protect them."
Vicki Shotbolt, chief executive at social enterprise Parent Zone, said it was encouraging to see the government taking "concrete steps" to make the internet a safer place for children.
Asking social-media companies to contribute towards the costs of educating the public about online dangers has a precedent in the gambling industry, which currently contributes funding towards the treatment of gambling addiction.
The government also wants to see online safety given more attention at schools, with social-media safety advice built into existing education programmes.
The consultation will close on 7 December, and the government expects to respond in early 2018.
The Google Home Mini in a Google-exclusive "coral red" color. Matt Weinberger/Business Insider
Google rushed out a fix to a glitch in its latest smart speaker last week that caused the device to surreptitiously record the conversations of its early testers without their knowledge or consent.
The bug affected a small number of the Google Home Mini devices that the company handed out to reporters at its press event last week, according to Google. The company rolled out a software update over the weekend to address the issue on those devices and is exploring a long-term fix.
"We learned of an issue impacting a small number of Google Home Mini devices that could cause the touch mechanism to behave incorrectly," the company said in a statement, adding, "If you're still having issues, please feel free to contact Google support."
Google unveiled the $50 Mini, which goes on sale on October 19, at its event on Wednesday. Soon after, Android Police's Artem Russakovskii, who was one of the reporters who received a test unit, discovered that his device was turning on by itself, recording his conversations, and uploading them to Google.
Normally, there are two ways to interact with Google's smart speakers, including the Mini. You can say the words "OK Google," followed by a command such as "play 'Bohemian Rhapsody.'" Alternatively, you can press the button located on the top of the devices instead of saying "OK Google."
But Russakovskii discovered that his Mini was listening in on him even when he hadn't pressed the device's button or said, "OK Google." When he checked his personal activity page on Google, the site that shows users' interactions with the search giant's services and the data it collects on users, he found sound files that had been uploaded to Google's servers from the Mini without his consent.
Google blamed the glitch on a faulty button in some of the units. The buttons on those Minis were detecting touches even when there was no touch to detect. Russakovskii apparently got one of the defective devices.
On October 7th, three days after it handed out the Mini review units, Google rolled out a software update that disables the button. The change affects every Mini it has handed out, even those that weren't malfunctioning. Meanwhile, the company says it has deleted all the data recorded from alleged button presses on the Mini review units — whether they were actual presses or not — from the time it handed out the devices to reviewers until it issued the update.
Ultimately, the problem appears to be a simple error, not a malicious act of spying. And the company is looking for a long-term solution.
But the glitch is one that Google would certainly have liked to avoid, for multiple reasons, as The Verge notes.
The bug could not only help undermine sales of the Mini but hamper Google's broader effort to turn itself into a top-tier hardware maker. Smart speakers like the Mini rely on customers' trust; it's an act of faith for consumers to let Amazon or Google place a microphone in their houses. They generally expect the companies to only record them when they're aware of it.
Worse, the nature of the glitch is likely to play into consumers' worst fears about the search giant. Lots of people are already sensitive to the fact that Google is collecting tons of data on its customers. And the company has previously been taken to task for collecting data without consumers' consent. Back in 2010, Google admitted its Google Maps Street View cars had been sucking up e-mails and passwords from unencrypted WiFi networks as the cars mapped neighborhoods around the country and world.
SAN FRANCISCO — Mark Zuckerberg apologized after a live-streamed virtual trip to hurricane-ravaged Puerto Rico to promote Facebook's Spaces app drew sharp criticism on social media.
"My goal here was to show how (virtual reality) can raise awareness and help us see what's happening in different parts of the world. I also wanted to share the news of our partnership with the Red Cross to help with the recovery. Reading some of the comments, I realize this wasn't clear, and I'm sorry to anyone this offended," the Facebook CEO wrote.
On Monday Zuckerberg and Rachel Franklin, who runs Facebook's social virtual reality efforts, embarked on what they called a "magical" tour of Puerto Rico where many residents are still without power, food, supplies and medical care.
The background was a 360-degree video from NPR which showed flooded streets and people clearing debris. Facebook's Spaces app lets you create a 3-D avatar and communicate with other avatars in a virtual space using an Oculus Rift VR headset.
"One of the things that’s really magical about virtual reality is that you can get the feeling that you are really in a place,” Zuckerberg's cartoon avatar says.
Zuckerberg also announced Facebook would help build "population maps" to help the Red Cross pinpoint where help is needed.
The response to the virtual reality stunt was a bit of a, well, disaster. On social media Zuckerberg was called a "heartless billionaire" and accused of "exploiting disaster."
On Wednesday, Facebook hosts its annual Oculus conference for virtual reality software developers.
The GDPR will be enforced from 25 May 2018. UK organisations that process the personal data of EU residents have only a short time to ensure that they are compliant.
Introduced to keep pace with the modern digital landscape, the GDPR is more extensive in scope and application than the current Data Protection Act (DPA). The Regulation extends the data rights of individuals, and requires organisations to develop clear policies and procedures to protect personal data, and adopt appropriate technical and organisational measures.
The Regulation mandates considerably tougher penalties than the DPA: organisations found in breach of the Regulation can expect administrative fines of up to 4% of annual global turnover or €20 million – whichever is greater. Fines of this scale could very easily lead to business insolvency. Data breaches are commonplace and increase in scale and severity every day. As Verizon’s 2016 Data Breach Investigations Report reaffirms, “no locale, industry or organization is bulletproof when it comes to the compromise of data”, so it is vital that all organisations are aware of their new obligations so that they can prepare accordingly.
UK organisations handling personal data will still need to comply with the GDPR, regardless of Brexit. The GDPR will come into force before the UK leaves the European Union, and the government has confirmed that the Regulation will apply – a position echoed by the Information Commissioner.