AI and robotics developers should sign a pledge to only create ethical technologies, says a new report from the RSA and YouGov.
The document, The Age of Automation, says: "Ethics training should be made a compulsory part of graduate computer science degrees, potentially culminating in a pledge akin to a Hippocratic Oath."
The report warns of AI and automated systems helping to reinforce existing biases in human society.
There is evidence that commentators' fears of a machine-dominated future - Elon Musk and Professor Stephen Hawking being among those warning of an apocalyptic outcome - are both recognised and shared by some providers.
A number of tech companies have now committed to creating new industry standards for the ethical development of AI. Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft are founding partners of www.partnershiponai.org.
The organisation says, "We are at an inflection point in the development and application of AI technologies. The upswing in AI competencies, fuelled by data, computation, and advances in algorithms for machine learning, perception, planning, and natural language, promise great value to people and society.
"However, with successes come new concerns and challenges based on the effects of those technologies on people's lives. These concerns include the safety and trustworthiness of AI technologies, the fairness and transparency of systems, and the intentional as well as inadvertent influences of AI on people and society."
In January, an EU Parliament investigation recommended the development of an advisory code for robotic engineers, and urged legislators to give robots 'personhood' status.
"These efforts should continue, but must not happen behind closed doors", says the RSA report.
Then on Thursday, Mark Zuckerberg said he was handing over details of more than 3,000 advertisements bought by groups with links to the Kremlin, a move made possible by the advertising algorithms that have made Mr Zuckerberg a multi-billionaire.
Gross misconduct, you might say - but of course you can’t sack the algorithm. And besides, it was only doing what it was told.
“The algorithms are working exactly as they were designed to work,” says Siva Vaidhyanathan, professor of media studies at the University of Virginia.
That is what makes this controversy so difficult to solve - a crisis that strikes at the core business of the world's biggest social network.
Facebook didn’t create a huge advertising service by getting contracts with big corporations.
No, its success lies in the little people. The florist who wants to spend a few pounds targeting local teens when the school prom is coming up, or a plumber who has just moved to a new area and needs to drum up work.
Facebook’s wild profits - $3.9bn (£2.9bn) between April and June this year - are due to that automated process. It finds out what users like, it finds advertisers that want to hit those interests, and it marries the two and takes the money. No humans necessary.
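That automated pairing of user interests with advertiser targets can be sketched in miniature. The code below is a hypothetical illustration of interest-based matching, not Facebook's actual system; all the names and data structures are invented for the example:

```python
# Hypothetical sketch of interest-based ad matching -- not Facebook's real system.
# An advertiser declares target interests; users are paired with ads automatically.

def match_ads(users, ads):
    """Pair each ad with every user whose interests overlap its targets."""
    placements = []
    for ad in ads:
        for user in users:
            if ad["targets"] & user["interests"]:  # any shared interest
                placements.append((ad["name"], user["name"]))
    return placements

users = [
    {"name": "alice", "interests": {"prom", "flowers"}},
    {"name": "bob", "interests": {"plumbing", "football"}},
]
ads = [
    {"name": "local florist", "targets": {"prom"}},
    {"name": "new plumber", "targets": {"plumbing", "diy"}},
]

print(match_ads(users, ads))
# [('local florist', 'alice'), ('new plumber', 'bob')]
```

At Facebook's scale the same idea runs over billions of profiles with no human checking what the "interests" actually are - which is precisely where the trouble begins.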
But unfortunately, that lack of oversight has left the company open to the kinds of abuse laid bare in ProPublica’s investigation into anti-Semitic targeting.
Image caption: Mark Zuckerberg resembled "an improbably young leader", the New York Times wrote
“Facebook’s algorithms created these categories of anti-Semitic terms,” says Prof Vaidhyanathan, author of Anti-Social Network, a book about Facebook due out later this year.
“It’s a sign of how absurd a human-free system can be, and how dangerous a human-free system can be.”
That system will be slightly less human-free in future. In his nine-minute address, a visibly uncomfortable Mark Zuckerberg said his company would be bringing on human beings to help prevent political abuses. The day before, its chief operating officer said more humans would help solve the anti-Semitism issue as well.
“But Facebook can’t hire enough people to sell ads to other people at that scale,” Prof Vaidhyanathan argues.
“It’s the very idea of Facebook that is the problem."
Mark Zuckerberg is in choppy, uncharted waters. And as the "leader" (as he sometimes likes to say) of the largest community ever created, he has nowhere to turn for advice or precedent.
This was most evident on 10 November, the day after Donald Trump was elected president of the United States.
When asked if fake news had affected voting, Mr Zuckerberg, quick as a flash, dismissed the suggestion as a "crazy idea".
That turn of phrase has proven to be Mr Zuckerberg’s biggest blunder to date as chief executive.
Image caption: Since Trump's election win, Facebook's influence has been under question
His naivety about the power of his own company sparked an immense backlash - internally as well as externally - and an investigation into the impact of fake news and other abuses was launched.
On Thursday, the 33-year-old found himself conceding that not only was abuse affecting elections, but that he had done little to stop it happening.
"I wish I could tell you we're going to be able to stop all interference,” he said.
"But that wouldn't be realistic. There will always be bad people in the world, and we can't prevent all governments from all interference.”
It was a huge turnaround from his position of just 10 months ago.
“It seems to me like he basically admits that he has no control over the system he has built,” Prof Vaidhyanathan says.
No wonder, then, that Mr Zuckerberg "had the look of an improbably young leader addressing his people at a moment of crisis”, as the New York Times put it.
Wolves at the door
This isn’t the first time Facebook’s reliance on machines has landed it in trouble - and it would be totally unfair to characterise this as a problem just affecting Mr Zuckerberg’s firm.
Just in the past week, for instance, a Channel 4 investigation revealed that Amazon's algorithm would helpfully suggest the components you needed to make a homemade bomb, based on what other customers also bought.
At least two high-profile US senators are drumming up support for a new bill that would force social networks with a user base greater than one million to adhere to new transparency guidelines around campaign ads.
Mr Zuckerberg’s statement on Thursday, an earnest pledge to do better, is being seen as a way to keep the regulation wolves from the door. He - and all the other tech CEOs - would much prefer to deal with this in his own way.
But Prof Vaidhyanathan warns he might not get that luxury, and might not find much in the way of sympathy or patience, either.
“All of these problems are the result of the fact Zuckerberg has created and profited from a system that has grown to encompass the world… and harvest information from more than two billion people."
Image caption: Would you be better off with an automated financial adviser?
Automated financial advice delivered by computer algorithm - often dubbed robo-advice - is a fast-growing business. But should you entrust your life savings to a computer?
For many of us, talking about money is embarrassing - revealing our income and spending habits can feel like disrobing in public.
So it's no wonder seeking investment advice from an impersonal, unbiased computer program is proving so popular.
Consultancy firm Accenture found that 68% of global consumers would be happy to use robo-advice to plan for retirement, with many feeling it would be faster, cheaper, and more impartial than human advice.
"Many of our clients say they feel awkward in face-to-face meetings, preferring an online experience where they don't feel judged," says Lynn Smith, a director of robo-advice firm Wealth Wizards.
So how does robo-advice work and is it really any better than traditional financial advice?
Robo-adviser firms use algorithms to analyse your financial situation and goals and then work out an investment plan to suit you.
Basically, you answer lots of questions online about your income, expenses, family situation, attitude to risk, and so on, and then the algorithm allocates your savings to a mix of investments, from index funds that aim to mimic a particular stock market index or sector, to fixed-income bonds.
Image caption: Robo-adviser algorithms allocate your cash to a balanced mix of investments
As some investments are riskier than others, younger investors will generally have their portfolios weighted towards higher-risk, higher-growth investments, whereas older investors approaching retirement will see the balance of their portfolios weighted towards lower-risk, fixed income investments, such as government bonds or gilts.
Joe Ziemer, vice president of communications at Betterment, a US robo-adviser with more than $9bn under management, says: "The Betterment service takes your information and uses a series of algorithms to create an asset allocation plan, which might be, for example, 90% equities and 10% bonds for a retirement saver."
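The age-based weighting described above can be sketched as a simple rule of thumb. The snippet below is an illustrative toy - it uses the rough "110 minus your age in equities" heuristic, not Betterment's or any other firm's actual algorithm, and the risk-tolerance scaling is invented for the example:

```python
# Illustrative toy allocation rule -- not any robo-adviser's actual model.
# Rough heuristic: hold roughly (110 - age)% in equities, the rest in bonds,
# scaled by a 0-1 risk tolerance taken from the client's questionnaire.

def allocate(age, risk_tolerance=1.0):
    """Return (equity %, bond %) weighted by age and risk tolerance."""
    equity = max(0, min(100, (110 - age) * risk_tolerance))
    return round(equity), round(100 - equity)

print(allocate(20))                      # young retirement saver: 90% equities, 10% bonds
print(allocate(64, risk_tolerance=0.5))  # cautious saver near retirement: bond-heavy
```

A real robo-adviser's model is far richer - income, dependants, goals and tax position all feed in - but the shape of the output is the same: a portfolio split that shifts from equities towards bonds as retirement approaches.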
The crucial point is that these algorithms work everything out for you at much lower cost than many traditional wealth advisory firms.
Wealth Wizards, for example, typically charges £65 for investments up to £30,000, and 0.30%, or £300, on a £100,000 investment pot. Betterment charges 0.25% a year.
That's peanuts compared to human advisers' fees, which come in at about £580 for advice on a £200-a-month pension contribution, or £1,000-£2,000 for guidance on what to do with your £100,000 pot when you retire, according to UK adviser network Unbiased.
Many of these robo-advisers will offer human advice as well - for an extra fee - if your finances are more complicated or you need tax planning services too.
"When a client needs advice spanning a number of different regulatory regimes, human advice will be required," says John Perks, managing director of life and pensions at UK insurer LV, which launched its Retirement Wizard robo-advice service two years ago.
Image caption: Many of us are facing poverty in retirement because we're not saving enough, yet living longer
So could these cheaper investment services encourage more of us to save more?
The powers that be certainly hope so.
World Economic Forum figures show the collective retirement savings gap of the world's largest economies will hit $400tn (£307tn) by 2050, meaning a lot of people could be spending their retirements in poverty.
Governments are concerned that this might then place an unsustainable burden on welfare systems.
Robo-advice is certainly growing in popularity.
Market research aggregator Statista says the US market will grow 29% per year between now and 2021, and forecasts that the number of Chinese investors using robo-advice services will jump from two million to 79.4 million in the same period.
Consultancy AT Kearney, meanwhile, forecasts that robo-advisers will be managing $2.2tn within five years - a 68% annual growth rate.
Image caption: Fashion stylist Donna McCulloch says Instagram "is your shop front"
Instagram, the Facebook-owned photo app, has become a lucrative shop window for many small entrepreneurs. So what are the secrets of its success?
When Facebook bought the photo app Instagram in 2012 for a cool $1bn (£760m), eyebrows were raised at the value the tech giant had placed on this 18-month-old start-up.
Fast forward to 2017, and while Instagram may still be Facebook's little sister, it has built a sizeable community of 700 million users - dwarfing both Twitter and Snapchat.
With improved photo filters and the addition of Instagram Stories, a feature that lets users upload short videos that disappear after 24 hours, the platform has become a big hit with freelancers and small organisations looking to reach new audiences.
So how can you use it to make money?
"Instagram is your shop front," says Donna McCulloch, a fashion stylist who works under the name Sulky Doll.
"People don't ask for business cards any more - they ask for your 'handle' [Instagram nickname]. It's instant - you both get your phones out, and you're connected."
Image caption: Yoga enthusiast Cat Meffan was "shocked and excited" by Instagram's marketing power
For yoga instructor Cat Meffan, the glamorous images she posts of herself in impressive yoga positions in picturesque locations around the world are intended to inspire and motivate her 77,000 followers.
But they also help her to build her business.
"I sold out my first yoga retreat in five days and all I did was put up one Instagram post," she says.
"I was extremely shocked and excited. That's the power of Instagram."
Cat says she'll spend up to an hour crafting the captions alongside her photos - sometimes more than she'll spend on taking the photo itself.
"Sometimes I'll go out and do a photoshoot with my partner. But usually it's me with a self-timer or holding the phone."
Like Donna, Cat finds adding hashtags to her photos a useful way of reaching a new audience. A search for #yoga, for example, will bring up her images alongside those of others, while Donna's #OOTD (Outfit of the Day) posts are by far her most popular.
"It's a nice way of finding like-minded people," says Cat.
Image caption: Singer Selena Gomez has 126 million followers on Instagram
Both women also use the Stories feature to post videos which, they say, show them as they really are - an antidote to the artificial gloss that many Instagrammers are notorious for adding to their images.
"Stories allow people to get more of a handle on you as a person and a brand," says Donna.
"Stories are like peeking behind the net curtains. The biggest compliment is when people say you come across the same in real life as you do on your feed [Instagram page]."
Both Cat and Donna have built their Instagram pages tightly around a very specific theme - yoga/wellness and fashion, respectively.
That's important if you want to grow the number of people who follow you, says Danny Coy, a photographer with 173,000 followers who now also works as an Instagram consultant.
For £300 a month his firm Vibrance says it can "typically" grow an account by 2,000 followers every four weeks. Techniques for attracting followers include posting regularly and having a bank of interesting images to hand.
"You don't have to post every day, but engagement peaks - after 24 hours it's done," he says.
"It's important to stick to your niche."
Image caption: Photographer Danny Coy thinks Instagrammers should stay focused on what they do best
That's Instagram's advice, too.
"If you tell a different story every time you come to Instagram, people will struggle to understand what you're trying to communicate," says Jen Ronan, the firm's head of small business for Europe, Middle East and Africa.
"Make sure you're thoughtful about what you want your customers to know and ensure that you're consistently reinforcing this over time."
Many of Danny's clients are companies, he says, who want to boost their numbers in order to look "legitimate" on the platform.
"From time to time it'll be an up-and-coming photographer who feels they can't get the numbers they deserve," he says. "Everyone has to start somewhere."
Instagrammers with a significant number of followers may be approached by brands seeking "influencers" or "ambassadors" to represent them - for a fee.
Incorporating brand products and imagery into photos and videos can be a lucrative sideline, although under Instagram's rules you have to make clear which content is sponsored.
Donna McCulloch doesn't do it: "I think I would lose my integrity," she says, although she does admit to wearing clothes she's been given.
"But it's because I wanted it," she maintains.
And Cat Meffan says she spends a lot of time "saying no" to brands she doesn't think are right for her - but she does accept some.
"There's no set fee in the Instagram world," she says. "You have a discussion [with the brand] and you have what you think you're worth."
Danny Coy says: "Eighteen months ago I could easily be turning over £2,000-£3,000 a month in terms of influencer content."
But he says the market is tailing off because brands have wised up. If an Instagrammer tags a brand in a post independently, the brand can use the image without payment.
"Most will ask first," he says. "But once you've tagged them and put it on Instagram they don't have to ask your permission."
Image caption: Mariann Hardey thinks the Instagram community doesn't mind branded content if it's entertaining
But isn't it a bit of a turn-off being marketed to by people whose content you admire? And do viewers sometimes not realise they are looking at paid-for content?
Mariann Hardey, assistant professor of marketing at Durham University, thinks the Instagram community isn't that gullible.
"It's easy to get het up that influencers are taking over and people don't understand they are seeing paid content, but the main users of Instagram are extremely savvy at being able to filter content that is branded or sponsored," she says.
What's most important is "whether the post is fun" and the pictures are "pretty", she adds.
So, the consensus seems to be that if the sponsored Instagrammer is well-liked and engaging, and the content is entertaining, Generation Instagram doesn't mind.
Image caption: Can artificial intelligence help solve the issue of online extremist content?
There is a long history of governments and technology companies falling out over extremist content. It has largely gone this way.
Impatient minister: "You've got some of the brightest boffins on the planet - if they can get an AI to tell me whether I need to take an umbrella tomorrow, why can't they sort out terrorism?"
Smug technology tycoon: "Now, listen, this is all very complex, so much so that it's not worth me bothering to explain it all to a numbskull with a history degree like you. Let's just say this, it's the internet, it's beyond the control of local politicians, it's huge and we don't decide how people use it - nor would we want to."
That of course is an outdated caricature. Things have moved on - politicians have gained a better understanding of the challenges posed by the internet, and the technology giants have finally woken up to political reality and accepted some responsibility for what is posted on their platforms.
Image caption: Theresa May will demand technology companies take down terror content within two hours
So, when Theresa May - or any other leading politician - demands action, they now nod sympathetically and say, 'We will do our best to help out.'
Having seen a little pressure result in movement on issues ranging from child abuse images to music piracy, the politicians are confident that they can get movement on extremist content.
But the technology companies still think the politicians have a simplistic view of the world.
They will point out the challenge of deciding what is extremist content - if you are going to ban sermons from firebrand preachers, does that include those from all religions?
They will ask whether we really want the likes of Mark Zuckerberg and Larry Page to have the power to determine what is acceptable - and how that will play in countries such as the United States that have free speech at the heart of their constitutions.
And they will point out that while Western politicians can put pressure on companies such as Facebook and Google, that will cut no ice with some of the platforms favoured by terrorist sympathisers - such as Telegram, founded by a Russian entrepreneur.
My prediction is that in a month the politicians and the technology tycoons will agree that they have made progress - and that artificial intelligence (AI) can help solve this problem - but there is more to do. And then when the next terrorist outrage happens, the blame game will start again.
In the meantime, expect Wednesday's line from Downing Street that "these companies have some of the best brains in the world" and should be able to solve the world's problems to be trotted out at regular intervals.
At the UN General Assembly on Wednesday, the prime minister will hail progress made by tech companies since the establishment in June of an industry forum to counter terrorism.
But she will urge them to go "further and faster" in developing artificial intelligence solutions to automatically reduce the period in which terror propaganda remains available, and eventually prevent it appearing at all.
Media caption: Google's general counsel Kent Walker defended its anti-terrorism efforts on BBC Radio 4's Today
Together, the UK, France and Italy will call for a target of one to two hours to take down terrorist content wherever it appears.
Internet companies will be given a month to show they are taking the problem seriously, with ministers at a G7 meeting on 20 October due to decide whether enough progress has been made.
Kent Walker, general counsel for Google, who is representing tech firms at Mrs May's meeting, said they would not be able to "do it alone".
"Machine-learning has improved but we are not all the way there yet," he told BBC Radio 4's Today programme, in an exclusive interview.
"We need people and we need feedback from trusted government sources and from our users to identify and remove some of the most problematic content out there."
Asked about carrying bomb-making instructions on sites, he said: "Whenever we can locate this material, we are removing it.
"The challenge is once it's removed, many people re-post it or there are copies of it across the web.
"And so the challenge of identifying it and identifying the difference between bomb-making instructions and things that might look similar that might be perfectly legal - might be documentary or scientific in nature - is a real challenge."
A Downing Street source said: "These companies have some of the best brains in the world.
"They should really be focusing on what matters, which is stopping the spread of terrorism and violence."
Technology companies defended their handling of extremist content after criticism from ministers following the London Bridge terror attack in June.
Google said it had already spent hundreds of millions of pounds on tackling the problem.
Facebook and Twitter said they were working hard to rid their networks of terrorist activity and support.
YouTube told the BBC that it received 200,000 reports of inappropriate content a day, but managed to review 98% of them within 24 hours.
Addressing the UN General Assembly, Mrs May will say terrorists will never win, but that "defiance alone is not enough".
"Ultimately it is not just the terrorists themselves who we need to defeat. It is the extremist ideologies that fuel them. It is the ideologies that preach hatred, sow division and undermine our common humanity," she will say.
A new report out on Tuesday found that online jihadist propaganda attracts more clicks in the UK than in any other country in Europe.
The study by the centre-right think tank, Policy Exchange, suggested the UK public would support new laws criminalising reading content that glorifies terror.
Image caption: IS militants are moving to less well-known sites after being chased off mainstream social media
Google said it will give £1m to fund counter-terrorism projects in the UK, part of a $5m (£3.7m) global commitment.
The search giant has faced criticism about how it is addressing such content, particularly on YouTube.
The funding will be handed out in partnership with UK-based counter-extremist organisation the Institute for Strategic Dialogue (ISD).
An independent advisory board will be accepting the first round of applications in November, with grants of between £2,000 and £200,000 awarded to successful proposals.
ISD chief executive Sasha Havlicek said: "We are eager to work with a wide range of innovators on developing their ideas in the coming months."
A spokesman for the Global Internet Forum to Combat Terrorism, which is formed of tech companies, said combating the spread of extremist material online required responses from government, civil society and the private sector.
"Together, we are committed to doing everything in our power to ensure that our platforms are not used to distribute terrorist content," said the spokesman.
Brian Lord, a former deputy director for Intelligence and Cyber Operations at UK intelligence monitoring service GCHQ, said the UN was "probably the best place" to raise the matter as there was a need for "an international consensus" over the balance between free speech and countering extremism.
He told BBC Radio 4's Today programme: "You can use a sledgehammer to crack a nut and so, actually, one can say: well just take a whole swathe of information off the internet, because somewhere in there will be the bad stuff we don't want people to see.
"But then that counters the availability of information," he said, adding that what is seen as "free speech" in one country might be seen as something which should be taken down in another.
Mrs May's appearance at the UN comes days before she is due to give a major speech on Brexit - a subject that led to repeated questions from journalists on her visit.
Foreign Secretary Boris Johnson was accused of undermining her plans by writing a 4,000-word newspaper article setting out his own vision for Brexit.
Speaking to the Guardian, Mr Johnson said he was "mystified" by the row his article had prompted, saying he had "contributed a small article to the pages of the Telegraph" because critics had been saying he was not speaking up about Brexit.
The figure was disclosed as part of a wider Freedom of Information request.
"Even if security vulnerabilities are identified in XP, Microsoft won't distribute patches in the same way it does for later releases of Windows," said Dr Steven Murdoch, a cyber-security expert at University College London.
"So, if the [police's] Windows XP computers are exposed to the public internet, then that would be a serious concern.
"If they are isolated, that would be less of a worry - but the problem is still that if something gets into a secure network, it might then spread. That is what happened in the NHS with the recent WannaCry outbreak."
Image caption: The NHS and other organisations were hit by the WannaCry attack four months ago
In May, ransomware known as WannaCry caused havoc to the National Health Service's computer systems.
Infected computers' files were digitally scrambled, making them inaccessible, while staff were told to switch off other PCs to stop the infection from spreading.
Operations and other appointments had to be cancelled as a consequence.
Greater Manchester Police said it was reducing its reliance on XP "continually".
"The remaining XP machines are still in place due to complex technical requirements from a small number of externally provided highly specialised applications," a spokeswoman told the BBC.
"Work is well advanced to mitigate each of these special requirements within this calendar year, typically through the replacement or removal of the software applications in question."
Most of the UK's police forces refused to disclose their numbers in response to the Freedom of Information request, citing security concerns.
Several suggested revealing a large figure might lead them to become a target, while revealing a low tally could put others at greater risk of attack.
However, eight forces that had fewer than 10 PCs using XP were willing to confirm the fact.
Of the other forces that shared their numbers:
Cleveland Police said it had seven computers running XP, representing 0.36% of the total
the Police Service of Northern Ireland said it had five PCs still running XP, representing 0.05% of the total
the Civil Nuclear Constabulary said it had fewer than 10 computers in operation running Windows XP, representing less than 1% of the total, but it added none of them was on its live network
Gwent Police, North Wales Police, Lancashire Constabulary, Wiltshire Police and City of London Police all said they had no computers running XP
The UK's biggest force - London's Metropolitan Police Service - was among those that refused to share an up-to-date figure.
But in June it said about 10,000 of its desktop computers were still running XP.
"Disclosing further information would reveal potential weaknesses and vulnerability," the force's information manager, Paul Mayger, said.
"This would be damaging as criminals/terrorists would gain a greater understanding of the MPS's systems, enabling them to take steps to counter them."
Image caption: Microsoft says computers that still use XP should not be considered "secure"
The Met had, however, answered a Freedom of Information request on the subject in October 2015, when it said 35,640 of its desktop and laptop computers were running XP.
The BBC has appealed against its refusal to provide an update.
Police Scotland was among those to refuse to provide any numbers at all.
"The requested information could be used by a hostile party to plan and execute an attack," said Colette McGloan, its lead disclosure officer.
"Such attacks could take the form of data theft, denial of service or other deliberate disruptions."
Cumbria Police indicated the WannaCry attack had caused it to refuse the request.
"Taking into account the recent cyber-attacks within the United Kingdom, no information... which may aid cyber-attacks should be disclosed," said disclosure and compliance officer Sarah Pearce.
"The more information disclosed over time will give a more detailed account of the ICT [information and communications technology] infrastructure of not only a force area but also the country as a whole."
However, one computer security expert took issue with these excuses.
"We should be praising police forces that have made good progress in upgrading to a newer operating system and calling those who haven't to account," said Ken Munro from Pen Test Partners.
"Surely it's in everyone's interests for us not to have an incident with the police like we did with the NHS, where we only discover the scale of the problem after an attack."
'Easy to detect'
Dr Murdoch said it would not be difficult for skilled attackers to identify vulnerable systems anyway.
"There is probably not much harm in disclosure, since if someone can get access to the computers, it's relatively easy to work out which ones are running Windows XP," he said.
"There are standard toolkits that adversaries use to run all the exploits they are aware of, and if anything works, then they will go with that."
Image caption: Manchester's police force says it is trying to reduce its reliance on Windows XP
For its part, Greater Manchester Police said that it saw no problem in complying with the request.
"The decision to share the figures on this has been made as the simple numerical response would not pose a significant increase to our organisational risks," said a spokeswoman.
The tech giant released a video with a list of queries people ostensibly ask the popular search engine, hinting at potential areas its next smartphones will focus on.
"What's wrong with my phone's battery?" is one of them, together with "Why is my phone always out of storage?" and "Why does my phone take so many blurry photos?".
Last year's Pixel phones focused heavily on the last two of those: their camera was among the very best on any smartphone, and free uploads of full-quality images to Google Photos made it easier to stop the handset hitting its storage cap, since local copies of the files could simply be deleted.
'Assume you are already hacked. At all times,' warns Carbon Black security strategist Rick McElroy
It will be a busier than usual weekend for the Equifax IT department...
When we drive past a major car crash, it's a natural reaction to slow down to take a good look.
The first thing most of us think is "Wow, that's awful", quickly followed by "I'm glad that wasn't me". Then we speed up and drive on. Most of us don't spend too much time thinking about how the wreck happened - we were just glad it wasn't us.
A similar instinct applies in cyber security. But instead of staring at the wreck, those of us responsible and accountable for information security must learn and adapt from every attack. So what can we learn from Equifax?
1. Assume you are already hacked. At all times
If you build your operations and defence with this premise in mind, your chances of detecting these types of attack and preventing a breach are much greater than those of most organisations today.
2. The root cause of the breach was a website vulnerability but the data lived on the endpoint
I don't have any details on the initial attack other than "Equifax discovered that criminals exploited a US website application vulnerability to gain access to certain files", but when it comes to data protection we too often focus on the network and not enough on the data.
When we do focus on the data, we focus on malware and not enough on attacks. Attackers will use any and all methods they can (typically the cheapest and fastest) to gain access. You need solutions that provide the full end-to-end picture of an attack.
3. Detection still takes too long
Whether it's 10 days, 30 days, 60 days or 210 days, the fact remains that this is entirely too long for an attacker to be in your systems. We need to better enable defenders to detect and respond. In this particular case, Equifax's detection of the breach was quicker than most; however, the length of time the attackers had access to systems and data left 44 per cent of the US population vulnerable to identity theft.
4. Visibility remains the key to detection and prevention
You cannot detect what you cannot see. It's that simple. You need the right data to detect and prevent these types of attacks. Without it, what shot do you have? If you don't have it, go get it. Remember, you are operating as if you are already breached. You wouldn't walk around your house at night without turning on the lights or a flashlight if you thought someone was in your house, would you?
5. We are all in this together
Data is linked: one breach can be leveraged for the next, and the next. Consider how this data, or the data from the OPM breach, could be leveraged for intelligence purposes or cyber crime. We rely (unfortunately) on national insurance numbers to prove who we are.
Most people now know that and take protecting them semi-seriously. What happens when the guardians of this data lapse? Maybe sharing a lesson from another team would have helped... maybe it wouldn't have. But we have to talk about this as a community. We really have to take the lessons learned seriously and drive change in our own programs.
6. It does not matter how big you are or the resources your team can access
I am assuming that Equifax has a larger information security budget than most organisations. As defenders, we always think: "If only I had enough money or people, I could solve this problem." We need to change our thinking. It's not how much you spend, but whether that spend is effective. Does it enable your team to disrupt attacks, or just wait to be alerted?
7. It's time we really start to look at options for replacing the social security number
We now have two-factor authentication for all kinds of things, except the thing that really matters most to most people.
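The second factor most people carry today is a time-based one-time password (TOTP) app. As a minimal sketch of how those six-digit codes are generated, here is RFC 6238's algorithm implemented with Python's standard library; the secret and timestamp are the RFC's own published test values, not anything connected to Equifax.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, step=30, digits=6):
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238's test secret is "12345678901234567890", shown here base32-encoded.
RFC_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(RFC_SECRET, for_time=59))  # 287082 (RFC 6238 Appendix B vector)
```

Because the code depends on a shared secret and the current 30-second window, a stolen database of names and identity numbers is not enough to pass the check.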
8. Encryption is your friend
These efforts are never easy to start and the projects take time, but stick it out: the benefits far outweigh the risks.
9. Web application security is still a thing
Markets move and focus shifts over time, but web-application firewalls (WAFs) and dynamic testing are still valuable tools. Open Web Application Security Project (OWASP) groups meet all the time. Check out www.owasp.org for lots of useful information and code.
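One of the most common web-application flaws in the OWASP lists is SQL injection, and the standard defence is parameterized queries: user input is bound as data, never spliced into the SQL string. A minimal sketch using Python's built-in sqlite3 (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '000-00-0000')")


def lookup(name):
    # The "?" placeholder binds the input as a literal value, so SQL
    # metacharacters in it cannot change the query's structure.
    return conn.execute(
        "SELECT ssn FROM users WHERE name = ?", (name,)
    ).fetchall()


print(lookup("alice"))             # the matching row
print(lookup("alice' OR '1'='1"))  # injection attempt: matches nothing
```

Had the name been concatenated into the SQL string instead, the second call would have returned every row in the table.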
10. Visibility is more crucial than ever
Did I say visibility twice? That's because it's that important.
First news on the NEXT Samsung Galaxy Note smartphone
The Galaxy Note 8 may well be followed by a 'bendy' successor
Samsung's next Galaxy Note smartphone could be foldable, according to Dongjin Koh, the president of the company's mobile business.
The revelation comes just days before the Galaxy Note 8 arrives in the UK.
Koh told reporters at a conference in South Korea that the company hopes to release its first 'bendable' smartphone next year under its flagship Galaxy Note line. However, Koh added that there are still a number of hurdles to overcome and the release would be pushed back if they're not resolved.
"As the head of the business, I can say our current goal is next year," he said. "When we can overcome some problems for sure, we will launch the product."
This isn't the first we've heard about Samsung's foldable smartphone plans. The company has long teased bendable display prototypes, such as the 'Youm' back in 2013.
During this week's news conference, Koh also confirmed that Samsung is working with Harman to develop an AI-enabled speaker, in a bid to take on the likes of the Amazon Echo and Google Home.
Bixby is likely to power the upcoming smart speaker, codenamed 'Vega' according to the rumours, which is expected to let users control connected devices around the home, such as lights, TVs and thermostats, by voice.
The speaker was allegedly set to launch alongside the Note 8, but the WSJ reported back in July that progress on the speaker had been held back by the slow roll-out of the US English version of Bixby.