The industry news items that appear in this section cover a broad spectrum of events within the cybersecurity industry. Everything from the motivation behind cyberattacks to the latest data breach figures is discussed – along with developments in the industry that help protect organizations against web-borne attacks and those launched through email campaigns.
The latest cybersecurity industry news should be essential reading for IT professionals – especially those in the healthcare and financial industries, which appear to be the most frequently attacked sectors. By addressing some of the security flaws highlighted in our news items, it may be possible to prevent your own organization from suffering a similar attack.
The cost of email archiving can be avoided, but fail to use an email archiving service at your peril. Huge fines await organizations that cannot recover emails promptly.
U.S. businesses are required to keep emails for several years. The IRS requires all companies to keep emails for 7 years, and the FOIA requires emails to be kept for 3 years. A 7-year retention period also applies to healthcare organizations (HIPAA), public companies (Sarbanes-Oxley), banking and finance (Gramm-Leach-Bliley Act), and securities firms (SEC).
While large firms are able to absorb the cost of email archiving, many SMBs look at the email archiving cost and try to save money by opting for backups instead. While it is possible to save on the email archiving cost by using backups, the decision not to use an email archiving service could prove to be very costly indeed.
Email backups can serve the same purpose as email archiving in the sense that both can be used to keep old emails. However, while an email backup can help a business protect against data loss, companies often encounter problems if there is ever a need to recover backed-up emails.
Email backups are fine for recovering entire email accounts (mostly). In the event of a malware or ransomware attack, email backups can be used to recover entire email accounts. However, what happens if only certain emails need to be found – for eDiscovery purposes in the event of a lawsuit for example?
An eDiscovery order may be received that requires all email correspondence sent to a particular client or customer to be retrieved. Such a request may require emails from hundreds of employees to be located, and those emails may date back several years. Finding all of them would be an incredibly time-consuming process, and it may not actually be possible to recover all correspondence. Backup files cannot easily be searched. They are just data repositories, not a well-managed archive.
An email archive, on the other hand, is different. Not only can individual emails be easily recovered, the entire archive can be quickly and easily searched. If an eDiscovery request is received, all requested emails can be retrieved in a process likely to take minutes. The recovery of files from a backup could take weeks or even months, assuming the task is even possible.
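To illustrate the difference, here is a minimal sketch using Python's standard `mailbox` module. The file names, addresses, and subjects are hypothetical. It builds a small mbox "archive" and then retrieves every message from one sender in a single pass – the kind of targeted retrieval an eDiscovery request demands, and something a raw backup file cannot offer without a full restore.

```python
# Sketch: targeted retrieval from a searchable mail store, as required
# for eDiscovery. All sample data below is made up for illustration.
import mailbox
import os
import tempfile
from email.message import EmailMessage

def build_sample_archive(path):
    """Create a small mbox 'archive' containing a few sample messages."""
    mbox = mailbox.mbox(path)
    for sender, subject in [
        ("client@example.com", "Contract terms"),
        ("vendor@example.com", "Invoice 42"),
        ("client@example.com", "Follow-up on contract"),
    ]:
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = "sales@example.com"
        msg["Subject"] = subject
        msg.set_content("(body)")
        mbox.add(msg)
    mbox.flush()
    mbox.close()

def search_by_sender(path, sender):
    """Return the subjects of all archived messages from a given sender."""
    mbox = mailbox.mbox(path)
    try:
        return [m["Subject"] for m in mbox if m["From"] == sender]
    finally:
        mbox.close()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        archive = os.path.join(tmp, "archive.mbox")
        build_sample_archive(archive)
        # One pass finds every message from the client, however old.
        print(search_by_sender(archive, "client@example.com"))
```

A production archiving service would add full-text indexing, retention policies, and tamper-evident storage on top of this basic idea; the point is that an archive is queryable, while a backup must first be restored wholesale.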
Email backups fail surprisingly often. The recent spate of ransomware attacks has highlighted a number of examples of data backups that have been corrupted, leaving organizations little option but to pay the attackers for a key to decrypt locked data. In the case of a ransomware infection, the ransom payment may be hundreds, thousands, or even tens of thousands of dollars. However, the cost of failing to produce email correspondence for eDiscovery or a compliance audit can be even higher.
Non-compliance with the Sarbanes-Oxley Act and other industry legislation can see fines of several million dollars issued. Only last year, Scottrade was issued with a fine of $2.6 million by the Financial Industry Regulatory Authority (FINRA). Scottrade had kept records of its emails, but not a complete record. More than 168 million emails had not been retained that should have been present in an archive. As Brad Bennett, Executive Vice President and Chief of Enforcement at FINRA explained when announcing the fine, “Firms must maintain sound supervisory systems and procedures to ensure the integrity, accuracy, and accessibility of electronic books and records.” That includes email correspondence.
Not only is the cost of email archiving low compared to the cost of a regulatory fine, email archiving is actually inexpensive in absolute terms, especially when using a cloud-based email archiving solution such as ArcTitan. Being cloud-based, emails are securely stored without the need for any additional hardware. Businesses can rest assured that no email will ever be lost.
In the event of an eDiscovery order, any email can be retrieved almost instantly, regardless of when the email was archived. No specific software is required as emails can be archived from Office 365 and archived messages can be accessed easily using an Outlook plug-in or even directly from the browser. Furthermore, the load on an organization’s email server can be greatly reduced. Reductions of 80% have been seen by a number of TitanHQ’s clients.
To find out more about the full benefits of email archiving and the features of ArcTitan, give the TitanHQ sales team a call today. We think you will be pleasantly surprised at how low the email archiving cost can be.
If your organization was hit with a malware or ransomware infection last year, the 2016 malware report from Malwarebytes may serve as an unpleasant reminder of 12 months best forgotten. Malware infections rose in 2016 and ransomware infections soared. In the case of the latter, there was an explosion in new variants. Malwarebytes charted a 267% increase in ransomware variants between January 2016 and November 2016. In quarter four alone more than 400 active ransomware variants were cataloged.
The 2016 malware report shows how ransomware has become the revenue-generator of choice for many cybercriminals. It is easy to understand why. Infecting computers is a relatively easy process, ransom payments are made within a matter of days, much of the process is entirely automated, and ransomware-as-a-service means no skill is even required to jump on the bandwagon and send out campaigns.
The 2016 malware report indicates ransomware accounted for 18% of malicious payloads from spam email and ransomware is the payload of choice for exploit kits, accounting for 66% of malicious downloads.
Locky was a major threat for most of the year, but in December there was a massive spike in Cerber ransomware variants, which are now the most populous ransomware family.
The cybersecurity company’s 2016 malware report confirms what many security professionals already know all too well: 2016 was a particularly bad year for everyone but the cybercriminals. Unfortunately, the outlook for 2017 does not look any better. In fact, it looks like it will be even worse.
Predictions have been made that will send shivers down many a system administrator’s spine. Ransomware is set to become even more aggressive. Critical infrastructures are likely to be targeted. Healthcare ransomware attacks will increase, potentially placing patients’ lives at risk. Educational institutions will be targeted. No organization will be immune to attack.
Fortunately, the number of new ransomware families is expected to be limited in 2017 – but only because Locky and Cerber are so effective and can so easily be tweaked to avoid detection.
Then there are the botnets. The increase in the use of IoT devices would not be a problem were it not for a lack of security. Many insecure devices are coming to market which can all too easily be added to botnets. As we saw at the tail end of the year, these botnets – such as Mirai – are capable of conducting devastating DDoS attacks. Those attacks are only likely to increase in scale and frequency. As Malwarebytes correctly points out, unless manufacturers of IoT devices are better regulated and forced to improve their security, vast sections of the Internet will come under threat.
So, it looks like all bad news for 2017. All organizations can do is purchase the technology to deal with the threats, plug security holes promptly, train staff to be aware of the threats, and shore up their defenses. The next 12 months could be a rocky ride.
According to PwC, 59% of businesses increased their cybersecurity spending in 2016. Cybersecurity is now increasingly being viewed as essential for business growth, not just an IT cost.
As more companies digitize their data and take advantage of the many benefits of the cloud, the threat of cyberattacks becomes more severe. The past 12 months have already seen a major increase in successful cyberattacks and organizations around the world have responded by increasing their cybersecurity spending.
The increased threat of phishing attacks, ransomware and malware infections, data theft, and sabotage has been a wake-up call for many organizations; unfortunately, it is often only when an attack takes place that the wake-up call occurs. However, forward-thinking companies are not waiting to be attacked; they are increasing their spending on cybersecurity and are already reaping the benefits. They experience fewer attacks, client and customer confidence increases, and they gain a significant competitive advantage.
The annual Global State of Information Security Report from Pricewaterhouse Coopers (PwC) shows that companies are realizing the benefits of improving cybersecurity defenses. More than 10,000 individuals from 133 companies took part in the survey that provided data for the report. 59% of respondents said that their company increased cybersecurity spending in 2016. Technical solutions are being implemented, although investment in people has also increased.
Cybercriminals are bypassing complex, multi-layered cybersecurity defenses by targeting employees. Organizations have responded by increasing privacy training. 56% of respondents say all employees are now provided with privacy training, and with good reason.
According to the report, 43% of companies reported phishing attacks in the past 12 months, making phishing the most commonly cited attack vector. The seriousness of the threat was highlighted by anti-phishing training company PhishMe, whose Enterprise Phishing Susceptibility and Resiliency Report showed that 90% of cyberattacks start with a spear phishing email. Given how effective training can be at reducing the risk from phishing, increasing spending on staff training is money well spent.
The same is true for technical cybersecurity solutions that reduce phishing risk. Two of the most important are antispam and web filtering solutions, each tackling the problem from a different angle: antispam solutions prevent phishing emails from reaching employees’ inboxes, while web filtering solutions block access to phishing websites. Combined with training, these solutions allow companies to effectively neutralize the threat.
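As a simple illustration of the web filtering side of that layered defense, the sketch below checks whether a requested URL’s host – or any parent domain of it – appears on a blocklist of known phishing domains before the request is allowed. The blocklisted domains are made up for this example; real web filters use continuously updated threat feeds and category databases.

```python
# Sketch: blocklist check of a URL's host before fetching it, the basic
# mechanism behind web filtering. The domains below are hypothetical.
from urllib.parse import urlparse

PHISHING_BLOCKLIST = {"phish-example.com", "fake-bank-login.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host or any parent domain is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and each parent domain, so that subdomains
    # (e.g. login.phish-example.com) are caught as well.
    return any(".".join(parts[i:]) in PHISHING_BLOCKLIST
               for i in range(len(parts)))

print(is_blocked("http://login.phish-example.com/reset"))  # True
print(is_blocked("https://example.com/"))                  # False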
Many companies lack the staff and resources to develop their own cybersecurity solutions; however, the range of managed security services now available is helping them to ensure that their networks, data, and systems are adequately protected. According to the PwC report, 62% of companies are now using managed security services to meet their cybersecurity and privacy needs. By using partners to assist with the challenge of securing their systems, organizations are able to use limited resources to better effect and concentrate those resources on other areas critical to business processes.
There has been a change in how organizations view cybersecurity over the past few years. Rather than seeing cybersecurity as simply a cost that must be absorbed, it is now increasingly viewed as important for business growth. According to PwC US and Global Leader of Cybersecurity and Privacy David Burg, “To remain competitive, organizations today must make a budgetary commitment to the integration of cybersecurity with digitization from the outset.” Burg also points out, “The fusion of advanced technologies with cloud architectures can empower organizations to quickly identify and respond to threats, better understand customers and the business ecosystem, and ultimately reduce costs.”
The Federal Trade Commission (FTC) is conducting a study to investigate the security update practices of mobile device manufacturers. The study is being conducted amid concern that mobile device manufacturers are not doing enough to ensure owners of mobile devices are protected from security threats.
Security Update Practices of Mobile Device Manufacturers Leave Mobile Users Exposed to Attack
A number of new and highly serious threats have emerged in recent years which allow attackers to remotely execute malicious code on mobile devices if users visit a compromised website. One of the most serious threats comes from the Stagefright vulnerability discovered last year.
The Stagefright vulnerability could potentially be exploited to allow attackers to gain control of Android smartphones. It has been estimated that as many as one billion devices are prone to attack via this vulnerability. Google released an Android update to fix the vulnerability, yet many mobile phone users were unable to update their devices as the manufacturer of their device, or the mobile carrier they used, did not allow the updates to be installed. Because of this, many smartphone owners are still vulnerable to attack.
Even when device manufacturers do update their devices there are often long delays between the issuing of the fix and the rolling out of updates. When a rollout is executed, it can take a week or more before all device owners receive their updates. During that time users are left vulnerable to attack.
The FTC wants to find out more about the delays and the rationale behind the slow rolling out of updates.
FTC and FCC Join Forces and Demand Answers from Carriers and Device Manufacturers
The FTC has joined forces with the Federal Communications Commission (FCC) for the study and has ordered smartphone manufacturers and developers of mobile device operating systems to explain how security updates are issued, the reasoning behind the decision to delay the issuing of security updates, and for some device manufacturers, why security updates are not being issued.
The study is primarily being conducted on manufacturers of devices running the Android platform, although Apple has also been ordered to take part, even though its devices are widely regarded as the most secure. Apple’s security update practices are likely to serve as a benchmark against which other manufacturers will be judged. Manufacturers that use the Android platform and will take part in the study include Blackberry, HTC, LG, Motorola, and Samsung. Google and Microsoft will also take part.
The FTC is asking operating system developers and mobile manufacturers to disclose the factors that are considered when deciding whether to issue updates to correct known vulnerabilities. They have been asked to provide detailed information on the devices they have sold since August 2013, whether security vulnerabilities have been discovered that affect those devices, and if and when those vulnerabilities have been – or will be – patched.
The FCC has asked questions of mobile phone carriers including the length of time that devices will be supported, the timing and frequency of updates, the process used when developing security updates, and whether device owners were notified when the decision was taken not to issue a security update for a specific device model.
Whether the study will result in better security update practices among mobile device manufacturers remains to be seen, although the results of the study, if published in full, will certainly make for interesting reading.
A new study has confirmed that the healthcare industry faces the highest risk of cyberattacks. Healthcare providers and health plans are being targeted by cybercriminals due to the value of patient data on the black market. A full set of medical records, along with personally identifiable information and Social Security numbers, sells for big bucks on darknet marketplaces. Health data is far more valuable than credit card data, for instance.
Furthermore, organizations in the healthcare industry store vast quantities of data and cybersecurity protections are still less robust than in other industry verticals.
The survey was conducted by 451 Research on behalf of Vormetric. Respondents were asked about the defenses they had put in place to keep sensitive data secure, how they rated their defenses, and how they planned to improve protections and reduce the risk of cyberattacks occurring.
78% of respondents rated their network defenses as very or extremely effective, with network defenses having been prioritized by the majority of healthcare organizations. 72% rated data-at-rest defenses as extremely or very effective. While this figure seems high, confidence in data-at-rest defenses ranked second from bottom. Only government industries ranked lower, with 68% of respondents from government agencies rating their data-at-rest defenses as very or extremely effective.
Even though many IT security professionals in the healthcare industry believe their network and data-at-rest defenses to be robust, 63% of healthcare organizations reported having experienced a data breach in the past.
The Risk of Cyberattacks Cannot Be Effectively Managed Simply by Becoming HIPAA-Compliant
Many organizations have been prioritizing compliance with industry regulations rather than bolstering defenses to prevent data breaches. Many healthcare organizations see compliance with the Health Insurance Portability and Accountability Act (HIPAA) as being an effective way of ensuring data are protected.
HIPAA requires all covered-entities – healthcare providers, health plans, healthcare clearinghouses, and business associates of covered entities – to implement administrative, technical, and physical safeguards to keep confidential patient data secure. By achieving “HIPAA-compliance” covered entities will improve their security posture and reduce the risk of cyberattacks, but compliance alone will not ensure that data are protected.
One only needs to look at the Department of Health and Human Services’ Office for Civil Rights breach portal to see that healthcare data breaches are commonplace. Many of the organizations listed in the breach portal have implemented defenses to protect data and are HIPAA-compliant. Compliance has not prevented data breaches from occurring.
The 451 Research survey asked respondents their views on compliance. 68% said it was very or extremely effective at ensuring data were secured. The reality is HIPAA only requires healthcare organizations to implement safeguards to achieve a minimum level of data security. In order to prevent data breaches and effectively manage the risk of cyberattacks, organizations need to invest more heavily in data security.
HIPAA does not, for example, require organizations to protect data-at-rest with encryption. If the network perimeter is breached, there is often little to prevent data from being stolen. Healthcare organizations are focusing on improving network protection but should not forget to protect data-at-rest with encryption. 49% said network security was still the main spending priority over the next 12 months, which was the highest rated security category for investment.
Healthcare organizations did appreciate that investment in technologies to protect data-at-rest was important, with 46% of respondents saying spending would be increased over the next 12 months on technologies such as disk and file encryption to help manage the risk of cyberattacks.
This week has seen the release of new U.S. data breach statistics by the Identity Theft Resource Center (ITRC). The new report reveals the extent to which organizations have been attacked over the past decade, breaking down data breaches by industry sector.
ITRC has been collecting and collating information on U.S. data breaches since 2005. Since records of security breaches first started to be kept, ITRC figures show a 397% increase in data exposure incidents. This year has seen the total number of data breach incidents surpass 6,000, with 851 million individual records now having been exposed since 2005.
U.S. Data Breach Statistics by Industry Sector
The financial sector may have been extensively targeted by cybercriminals seeking access to financial information, but between 2005 and March 2016 the industry accounted for only 7.9% of data breaches. The heavily regulated industry has implemented a range of sophisticated cybersecurity protections to prevent breaches of confidential information, which has helped to keep data secure. The business and healthcare sectors were not so well protected and account for the majority of data breaches over the past decade.
Over the course of the past decade, the financial sector ranked lowest for breaches of Social Security numbers. The largest data security incident exposed 13.5 million records. That data breach occurred when data was on the move.
At the other end of the scale is the business sector, which includes the hospitality industry, retail, transport, trade, and other professional entities. This sector had the highest number of data breaches accounting for 35.6% of all data breaches reported in the United States. Those breaches exposed 399.4 million records.
ITRC’s U.S. data breach statistics show that the business sector was the most frequently targeted by hackers over the course of the past decade, accounting for 809 hacking incidents. Hackers were able to steal 360.1 million records and the industry accounted for 13.6% of breaches that exposed credit and debit card numbers. The huge data breaches suffered by Home Depot and Target involved the exposure of a large percentage of credit and debit card numbers.
Healthcare Sector Data Breaches Behind the Massive Rise in Tax Fraud
The business sector was closely followed by the healthcare industry, which has been extensively targeted in recent years. ITRC reports that the industry accounted for 16.6% of data breaches that exposed Social Security numbers. Since 2005, over 176.5 million healthcare records have been exposed, and over 131 million records were exposed as a result of hacking since 2007. That includes the 78.8 million records exposed in the Anthem Inc. data breach discovered early last year.
While hacking has exposed the most records, employee negligence and error were responsible for 371 data breaches in the healthcare industry. Healthcare industry data breaches are believed to have been responsible for the massive increase in tax fraud experienced this year. Tax fraud surged by 400 percent in 2016.
Government organizations and military data breaches make up 14.4% of U.S. data breaches over the past decade, with the education sector experiencing a similar number, accounting for 14.1% of breaches. Over 57.4 million Social Security numbers were exposed in government/military data breaches, along with more than 389,000 credit and debit card numbers.
The education sector experienced the lowest number of insider data breaches of all industry sectors (0.7%) although 2.4 million records were exposed via email and the Internet.
Cybersecurity Protections Need to Be Improved
The latest U.S. data breach statistics show that all industry sectors are at risk of cyberattack, and all must improve cybersecurity protections to keep data secure. According to Adam Levin, chairman and founder of IDT911, “Companies need to create a culture of privacy and security from the mailroom to the boardroom. That means making the necessary investment in hardware, software and training. Raising employee cyber hygiene awareness is as essential as the air we breathe.”
Symantec’s 2016 Internet security threat report has revealed the lengths to which cybercriminals are now going to install malware and gain access to sensitive data. The past 12 months have seen a substantial increase in attacks, and organizations are now having to deal with more threats than ever before.
Internet Security Threat Report Shows Major Increases in Ransomware, Malware, Web-borne Threats and Email Scams
The new Internet Security Threat Report shows that new malware is being released at a staggering rate. In 2015, Symantec discovered over 430 million unique samples of malware, representing an increase of 36% year on year. As Symantec points out, “Attacks against businesses and nations hit the headlines with such regularity that we’ve become numb to the sheer volume and acceleration of cyber threats.”
A new zero-day vulnerability is now being discovered at a rate of one per week, twice the number seen in 2014 and 2013. In 2015, 54 new zero-day vulnerabilities were discovered. In 2014 there were just 24 zero-day exploits discovered, and 23 in 2013.
The 2016 Internet Security Threat Report puts the total number of lost or stolen computer records at half a billion, although Symantec reports that organizations are increasingly choosing to withhold details of the extent of data breaches. The breach may be reported, but there has been an 85% increase in organizations not disclosing the number of records exposed in breaches.
Ransomware Attacks Increased 35% in 2015
Ransomware is proving more popular than ever with cybercriminal gangs. In 2015, ransomware attacks increased by 35%. The upward trend in 2015 has continued into 2016. Spear phishing attacks have also increased. While these attacks are often conducted on large organizations, Symantec reports that spear phishing attacks on smaller companies – those with fewer than 250 employees – have been steadily increasing over the past five years. In 2015, spear phishing attacks increased by a staggering 55%.
Cybercriminals may now be favoring phishing attacks and zero-day exploits over spam email scams, but the latter still pose a major risk to corporate data security. There has also been a rise in the number of software scams. Scammers are getting consumers to purchase unnecessary software by falsely claiming there is a security problem with their computer. Symantec blocked 100 million fake technical support scams last year.
75% of Websites Found to Contain Exploitable Security Vulnerabilities
One of the most worrying statistics from this year’s Internet Security Threat Report is that over 75% of websites contain unpatched security vulnerabilities which could potentially be exploited by hackers. Even popular websites have been found to contain unpatched vulnerabilities. If attackers can compromise those websites and install exploit kits, they can infect millions of website visitors. Simply being careful about which sites are visited and only using well-known sites is no guarantee that infections will be avoided.
With the dramatic increase in threats, organizations need to step up their efforts and improve cybersecurity protections. Failure to do so is likely to see many more of these attacks succeed.
The healthcare ransomware threat is not new, but the threat of attack is growing. Last week, a healthcare provider in the United States found out just how damaging a ransomware attack can be. Hollywood Presbyterian Hospital experienced a ransomware attack on February 5, resulting in part of its computer network being taken out of action for more than a week.
The healthcare provider’s electronic health record system (EHR) was locked by ransomware and a demand of $17,000 was made by the attackers to supply the security keys. This is not the first time that a healthcare provider has had to deal with a ransomware infection, but attacks on healthcare organizations have been relatively rare.
What makes this attack stand out is the fact that the ransom was actually paid. CEO Allen Stefanek said “The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom.”
The Healthcare Ransomware Threat is Very Real
Many businesses in the country have been attacked and have been forced to pay sizable ransoms in order to get a security key to decrypt their locked data. If data is encrypted by attackers, and no backup exists, there is little choice but to pay the ransom and hope that the attackers make good on their promise to supply the security keys.
There is no guarantee that the attackers will supply the keys, of course. They could just demand even more money. There have also been cases where the attackers have “tweaked” their ransomware but accidentally broke it in the process, so that even if a ransom was paid, it would not be possible to unlock the data.
Paying a ransom does not therefore guarantee that the security keys will be supplied. In this case, the attackers did make good on their promise and supplied the keys allowing business to return to normal.
The public announcement about the ransomware attack, and the disclosure of the payment of the $17,000 ransom, could potentially lead to even more attacks taking place. That is a big payment for a hacker, yet orchestrating a ransomware campaign is relatively easy, and does not require a major financial outlay. The return on investment will be significant if a healthcare provider is forced to pay a ransom. Since the ransom was paid, this may prompt many more hackers to attack healthcare providers.
Ransomware Attack Raises a Number of Questions
This attack does raise a number of questions. What many security professionals will be asking is why the hospital paid at all. In the United States, healthcare providers are required to make backups and store those data off-site. In the event of an emergency such as this, a healthcare provider must be able to restore patient data. This is a requirement of the Health Insurance Portability and Accountability Act (HIPAA). It doesn’t matter what the emergency is: if computers or networks are taken out of action, the protected health information of patients cannot be lost.
The reality, however, is that restoring computer systems after a ransomware attack may not be quite so straightforward. It would depend on the extent of the ransomware attack, the number of systems that were compromised, the difficulty of restoring data, and how much data would actually be lost.
Backups should be performed daily, so it is possible that 24 hours of data may have been lost, but unlikely any more. Even if data loss had occurred, it is probable that the data were stored elsewhere and could be recovered. The payment of the ransom suggests that there may actually have been an issue with the backups, or that the cost of recovering data from the backups would have been greater than the cost of paying the ransom.
Dealing with the Healthcare Ransomware Threat
Regardless of the reasons why data restoration was not possible, or paying the ransom seemed preferable, other healthcare providers should be concerned. Further attacks are likely to take place, so it is essential that backups are performed regularly, and critically, those backups are tested. A backup of data that cannot be restored is not a backup. It is a false hope.
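The advice to test backups can be automated. The sketch below is a hypothetical example using only the Python standard library: it creates a backup archive, restores it to a scratch directory, and verifies every file’s SHA-256 checksum against the original. A restore that cannot be verified this way should be treated as a failed backup.

```python
# Sketch: verify that a backup archive actually restores intact by
# comparing checksums of original and restored files. Paths are examples.
import hashlib
import os
import tarfile
import tempfile

def file_sha256(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir, archive_path):
    """Restore the archive to a temp dir and compare checksums file by file."""
    with tempfile.TemporaryDirectory() as restore_dir:
        with tarfile.open(archive_path) as tar:
            tar.extractall(restore_dir)
        for root, _, files in os.walk(source_dir):
            for name in files:
                src = os.path.join(root, name)
                rel = os.path.relpath(src, source_dir)
                restored = os.path.join(restore_dir, rel)
                if not os.path.exists(restored):
                    return False  # file missing from the backup
                if file_sha256(src) != file_sha256(restored):
                    return False  # file corrupted in the backup
    return True

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as work:
        src = os.path.join(work, "mail")
        os.makedirs(src)
        with open(os.path.join(src, "inbox.mbox"), "w") as f:
            f.write("important correspondence")
        archive = os.path.join(work, "backup.tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(src, arcname=".")
        print(verify_backup(src, archive))  # True: backup restores cleanly
```

Running a check like this on a schedule – rather than only when disaster strikes – is what turns a backup from a false hope into a dependable recovery option.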
Furthermore, healthcare providers must ensure employees are trained to spot malware and ransomware, and software solutions should be implemented to prevent spam emails from being delivered to inboxes. Staff should be prepared, but it is best not to put their malware identification skills to the test.
Not all ransomware is delivered via spam email. Additional protections must also be put in place to prevent drive-by attacks, and malvertising should be blocked. A web filtering solution, such as WebTitan, should be installed to reduce the risk of ransomware downloads and to enforce safe use of the Internet.
There is no silver bullet that can totally negate the healthcare ransomware threat. It is impossible to make any system 100% secure, but by implementing a range of protections the risk of a ransomware infection can be reduced to an acceptable level. A disaster recovery plan must also exist that will allow data to be restored in the event that an attack does prove to be successful.
Many security professionals would like to know the motivation behind cyberattacks. How much do hackers earn? What actually motivates hackers to attack a particular organization? How long do hackers persist before giving up and moving on, and how profitable is cybercrime for the average hacker?
A recent survey commissioned by Palo Alto Networks provides some answers to these questions and offers some insight into the minds of hackers. The results of the survey suggest that cybercrime is not as profitable as many people think. In fact, “the big payday” is actually something of a myth, certainly for the majority of hackers.
There is a common misconception that cyber attackers are tirelessly working to breach the defenses of organizations and are raking in millions from successful attacks; however, the survey results indicate otherwise.
The Ponemon Institute asked 304 threat experts their opinions on the motivation behind cyberattacks, the money that can be made, the time invested by hackers, and how attackers choose their targets.
The respondents, based in Germany, the United States, and the United Kingdom, were all involved in the threat community to varying degrees. 79% of respondents claimed to be involved in the threat community, with 21% of respondents saying they were “very involved.”
What is the motivation behind cyberattacks?
The study casts some light on the motivation behind cyberattacks, as well as offering some important insights into the minds of hackers. There is a threat from hacktivists and saboteurs but, in the majority of cases, attackers are not intent on causing harm to organizations. The majority of cybercriminals are in it for the money: it is the motivation behind 67% of cybercrime.
However, in the majority of cases, it would appear that there is not actually that much money to be made. If hackers were to find employment as security professionals and use their skills to protect networks from hackers, they would likely earn a salary four times as high, and they would get sick pay, holiday pay, and medical/dental insurance.
How much do hackers earn?
Anyone interested in how much hackers earn may be surprised to find out it is not actually that much. The study determined that a technically proficient hacker would be able to conduct just over 8 cyberattacks per year, and an average of 41% of those attacks would not result in the attacker receiving any compensation.
The profits from cybercrime were found to be fairly constant regardless of where the criminals were based. In the United States a single cyberattack netted the perpetrator an average of $15,638. In the United Kingdom attackers earned an average of $12,324, and in Germany it was $14,983.
So how much do hackers earn? Take away the cost of the toolkits they purchase – an average of $1,367 – and the Ponemon Institute calculated the average earnings for a cyber attacker to be in the region of $28,744 per year. That figure was based on 705 hours spent “on the job” – around 13.5 hours per week. While it is clear that some hackers earn considerably more, the average hacker would be better off getting a real job: IT security practitioners earn 38.8% more per hour.
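As a quick sanity check of the survey's arithmetic, the hourly comparison can be reproduced from the figures quoted above:

```python
# Figures from the Ponemon survey cited above.
ANNUAL_HACKER_EARNINGS = 28_744   # USD per year, after toolkit costs
HOURS_PER_YEAR = 705              # time spent "on the job"
SECURITY_PRO_PREMIUM = 0.388      # security practitioners earn 38.8% more per hour

hacker_hourly = ANNUAL_HACKER_EARNINGS / HOURS_PER_YEAR
security_pro_hourly = hacker_hourly * (1 + SECURITY_PRO_PREMIUM)

print(f"Hacker hourly rate:       ${hacker_hourly:.2f}")        # ≈ $40.77
print(f"Security pro hourly rate: ${security_pro_hourly:.2f}")  # ≈ $56.59
```

At roughly $41 per hour versus $57 per hour, the legitimate career wins even before sick pay, holiday pay, and insurance are factored in.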
How can the survey data be used to prevent cyberattacks?
The survey probed respondents to find out how determined hackers were at breaching the defenses of companies. Surprisingly, it would appear that even if the potential prize is big, hackers tend not to spend a great deal of their time on attacks before moving on to easier targets.
72% of hackers are opportunistic and 69% of hackers would quit an attack if a company’s defenses were discovered to be strong. Ponemon determined that an attack on a typical IT security infrastructure took around 70 hours to plan and execute, whereas a company with an excellent infrastructure would take around 147 hours.
However, if a company can resist an attack for 40 hours (less than two days) 60% of attackers would move on to an easier target. Cybercriminals will not waste their time attacking organizations that make it particularly difficult to obtain data. There are plenty of much easier targets to attack.
Install complex, multi-layered defenses and use honeypots to waste hackers’ time. Make it unprofitable for attackers and in the majority of cases attackers will give up and move on to easier targets.
The cost of bot fraud in 2016 is likely to rise to a staggering $7.2 billion, according to a new report by the Association of National Advertisers (ANA).
2015 Bot Baseline study places the cost of bot fraud at over $7 billion
The study, conducted in conjunction with WhiteOps, shows that despite efforts to reduce the impact of bot fraud, criminal gangs are still managing to game the online advertising industry. Advertisers are being tricked into thinking that real visitors are viewing their adverts and are paying for those visits, when in actual fact a substantial percentage come from bots.
For some companies the losses were shocking. The highest losses were reported to have cost one company $42 million over the course of the year. However, even smaller companies did not escape unscathed. The cost of bot fraud for the least affected advertiser was $250,000.
ANA studied 1,300 advertising campaigns conducted by 49 major companies over a two-month period, from August 1, 2015 to September 30, 2015. The results of the study were then extrapolated to estimate the cost of bot fraud for 2016.
The study examined more than 10 billion ad impressions to determine the percentage that were real visitors. To distinguish bot visits from the human visits, ANA/WhiteOps added detection tags to the advertising campaigns under study.
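WhiteOps's detection tags are proprietary, but the underlying idea – flagging traffic that does not behave like a human browser – can be illustrated with a crude heuristic. The signals below (bot-like user agents, sub-second dwell times, missing cookie support) are illustrative assumptions for the sketch, not the actual ANA/WhiteOps methodology:

```python
def looks_like_bot(user_agent: str, dwell_secs: float, accepts_cookies: bool) -> bool:
    """Crude heuristic: flag impressions with bot-like user agents,
    implausibly short page dwell times, or no cookie support."""
    BOT_KEYWORDS = ("bot", "crawler", "spider", "headless")
    ua = user_agent.lower()
    if any(k in ua for k in BOT_KEYWORDS):
        return True
    if dwell_secs < 1.0:        # humans rarely leave within a second
        return True
    if not accepts_cookies:     # many simple bots ignore cookies
        return True
    return False

# Hypothetical impression log: (user agent, dwell time, cookies accepted)
impressions = [
    ("Mozilla/5.0 (Windows NT 10.0) Chrome/47.0", 12.4, True),
    ("HeadlessChrome/46.0", 0.2, False),
    ("Googlebot/2.1", 3.0, True),
]
bot_share = sum(looks_like_bot(*i) for i in impressions) / len(impressions)
print(f"{bot_share:.0%} of sampled impressions look non-human")
```

Real bot detection is far harder than this, precisely because, as the study notes, sophisticated bots mimic human browsing habits and route through residential IP addresses.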
The same study was conducted back in 2014 and this year’s results show that virtually nothing has changed, with a fall in bot fraud of just 0.2% registered. The level of bot fraud has remained constant, although the cost to companies has increased.
In 2014, online advertisers were estimated to have lost around $5 billion to bot fraud. The rise in the cost of bot fraud is attributed to an expected increase in advertising investment over the course of the next 12 months.
Last year, brands suffered an average of $10 million each in losses to bot fraud – billions of dollars paid, collectively, to advertise to bots. For 25% of companies, 9% of impressions go to non-human traffic.
Methods of bot detection have improved, but they are clearly not having much of an effect on the cost of bot fraud for advertisers. As detection methods improve, bot operators have improved their ability to obfuscate their bot visits.
Unfortunately, it is difficult to distinguish bot traffic from real traffic as more residential IP addresses are being used, and the bots are becoming better at mimicking real browsing habits.
Further information has emerged on the Juniper Networks backdoor discovered last week, which suggests the NSA had a hand in the installation of a backdoor in the company’s source code.
Last week, a Juniper Networks backdoor was discovered after the company identified unauthorized code which could potentially allow hackers to gain access to secure communications and data that its customers had protected with its firewalls.
The malicious code would allow a hacker to decipher encrypted communications protected by the company’s NetScreen firewalls. It is not known at this stage how the code was installed, or whether this was an inside job or the code was inserted remotely. What is known is that the person or group responsible installed the Juniper Networks backdoor by exploiting an inherent weakness in the system. They were also helped by a coding configuration error believed to have been made by a company employee.
Juniper Networks Backdoor Installed Using NSA-Introduced Weakness
One security researcher, Ralf-Philipp Weinmann of German firm Comsecuris, has claimed that the weakness in Dual_EC had been put there by the NSA, which championed the use of Dual_EC. It is not known whether the NSA or one of its spying partners was responsible for changing the source code, but it would appear that the NSA had, perhaps inadvertently, introduced a weakness that ultimately led to the system being compromised.
The weakness in the code was first uncovered in 2007. The flaw was found in the Dual_EC algorithm by two Microsoft researchers, Dan Shumow and Niels Ferguson. The Dual_EC algorithm had just been approved by NIST, alongside three other random number generators, and the encryption was believed to be secure enough to use to protect government data.
However, Shumow and Ferguson were able to demonstrate that the elliptic curve-based Dual_EC system could allow hackers to predict a random number used by the algorithm, which would make the encryption susceptible to being hacked.
Specific elliptic curve points were used as part of the random number generator. If one of those points was not a randomly generated number, and the person responsible for determining that point also generated a secret key, any holder of that key could potentially crack the encryption as it would be possible to determine the random number used by the algorithm. If that number could be predicted, the encryption could be cracked. Dan Shumow and Niels Ferguson believed this would be possible with just 32 bytes of output, if the key was known.
The flaw in Dual_EC is believed to be an intentional backdoor in the encryption that was introduced by the NSA, according to documents published by Edward Snowden. However, this was deemed not to be a problem as a second random number generator was used by Juniper. The second random number generator was supposed to have been used for the encryption, meaning even someone with a secret key would not be able to predict the random number used.
However, a coding error resulted in the original random number generator being used rather than the second one. Someone managed to break into the system and substitute their own constant; consequently, the encryption could be cracked.
The Juniper Networks backdoor has now apparently been plugged, with the company recently issuing a patch to fix the problem. However, it would appear that the backdoor had existed for at least three years.
According to reports from FireEye, IT security professionals do not only need to be concerned about malware attacks on computers, servers, and Android devices: Cisco router malware has now been discovered.
Cisco router malware discovered on 79 devices to date
Cisco router malware is highly sophisticated and particularly worrying. The malware can survive a restart and will be reloaded each time. Cisco router malware is also highly versatile and can be tweaked to suit an attacker’s needs. It has been found to support up to 100 different modules.
The malware was first discovered in Ukraine, although infections have now spread to 19 countries around the world, including the US, UK, Germany, China, Canada, India, and the Philippines. At this stage it is not clear who created the malware, or what its main purpose is.
It is also not clear whether the malware has been installed via exploited vulnerabilities. It is possible that routers have been hijacked as a result of default logins not being changed, or weak passwords being set.
It is known that the Cisco router malware is sophisticated and appears to have been professionally developed. This has led security researchers to believe that foreign governments had a hand in its development. Should that be the case, it is likely that the main purpose of the malware is spying. While it has been known for some time that router malware is possible in theory, this is the first time that such malware has been discovered affecting routers in the wild.
SYNful Knock came as a big surprise to many security professionals
The malicious software is called SYNful Knock and it serves as a fully functional backdoor allowing remote access of networks. The attacks are also silent in many cases, and hackers are able to use the malware without risk of detection.
To date, the United States has been the biggest target of the cybercriminals behind the malware infections, with 25 of the 79 infections discovered in the U.S. – all 25 infected routers were hosted by a single ISP. Lebanon has also been targeted, with 12 infections discovered in the country, while 8 of the 79 infections have been found in Russia.
The infections were discovered using ZMap. Four full scans of the public IPv4 address space were performed, probing for signs of the malware by sending TCP SYN packets. At this stage it would appear that only Cisco routers have been affected by SYNful Knock, but there is concern that other manufacturers’ routers may also be infected. Researchers are now investigating whether router malware is a more widespread problem.
Did you think the Ashley Madison data breach was mildly humorous? Did you think that it serves the people right for cheating on their husband, wife or life partner? If you did, you certainly didn’t have an account with the online cheating website. Those who did simultaneously broke out in a cold sweat when they realized the website had been hacked and the perpetrator was threatening to make the data public.
Ashley Madison data breach exposed millions of confidential records
The Impact Team was the hacking group behind the Ashley Madison data breach. The group announced on the Tor network that it had hacked the company’s database. The hackers claimed they would release details of the website’s patrons – people looking to have extra-marital affairs – if the company did not shut down the website. Avid Life Media Ltd., the company behind Ashley Madison, did not agree to close its business. The hackers then made good on their promise and started publishing data. A large data dump caused many of the website’s subscribers to panic.
The methods used by the attackers to gain access to the website have not been disclosed, although they were able to obtain the records of more than 30 million individuals in the attack. Unfortunately for the people who have had their privacy violated, there is little that can be done apart from taking precautions with their financial accounts. Their data cannot be un-exposed; it is out there and can be used by whoever finds it. That means phishers, cybercriminals, identity thieves, and anyone who takes objection to their extra-marital activities may try to expose them.
A data breach can seriously damage a company’s reputation
This was a high profile breach due to the nature of the website and the total confidentiality that is expected and demanded by the company’s clients. A data breach such as this has potential to cause considerable damage to a brand with a marketing strategy and service that depends on privacy. However, brand reputation damage occurs following any security breach. Target, Anthem Inc., eBay, OPM. All have had their reputations damaged to varying degrees as a result of security breaches and data theft.
Many IT professionals believe that it is not a case of whether a security breach will be suffered, but when it will happen. A great many security professionals believe that most companies have already suffered a security breach. They just do not know it yet.
Lessons learned from the Ashley Madison data breach
Consumers can learn lessons from the Ashley Madison data breach. They should be aware that disclosing any information increases the risk of someone else accessing that information.
The lessons for consumers are:
- If you want to do anything in secret, the Internet is probably not the best place to do it
- When disclosing information of a sensitive nature, ask yourself what the consequences would be if someone found out or exposed that information
- Would you be able to recover from a breach of that information?
- Is the service or product more or less important than it being kept a secret?
- No matter how secure a website, service, or application claims to be, there is always a risk of a security breach being suffered
- There is never a 100% guarantee of privacy online – All networks and systems are vulnerable to attack
Businesses must conduct a risk analysis
Businesses must also consider the risks to data security. Many security threats exist, and they must all be effectively managed. In order to determine what risks exist, an organization must conduct a thorough risk analysis. It is only possible to address and manage risk if a company knows what security vulnerabilities exist. Unfortunately, many hackers already know about the data security risks that are present, as well as how they can be exploited.
Once a risk is identified, unless state or federal legislation demands that the risk be addressed, a company must decide what measures to employ and whether they are actually worthwhile.
To do that, a company must calculate the annualized rate of occurrence (ARO) of a security breach via a given vulnerability – how often the vulnerability is likely to be exploited in any given year. The company must then determine the repercussions of that vulnerability being exploited: how much the security breach would cost to resolve. That figure is the single loss expectancy (SLE). Once these figures are known, the annual loss expectancy (ALE) can be determined by multiplying the two together. A decision can then be taken about how the risk should be managed.
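The calculation above can be sketched in a few lines. The ARO and SLE figures below are hypothetical, purely to illustrate the arithmetic:

```python
def annual_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = ARO (expected exploits per year) x SLE (cost per incident)."""
    return aro * sle

# Hypothetical example: a breach via a given vulnerability is expected
# once every two years, and costs $150,000 per incident to resolve.
aro = 0.5
sle = 150_000
ale = annual_loss_expectancy(aro, sle)
print(f"ALE: ${ale:,.0f} per year")  # ALE: $75,000 per year
```

The ALE gives a ceiling for rational spending: a control that costs less per year than the ALE it eliminates is usually worth deploying, while a more expensive control may not be.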
Sean Doherty, Head of Research & Development at TitanHQ recently pointed out that “the notion of having ‘perfect security’ is ludicrous”. What must be done is to make it as hard as possible for systems to be infiltrated and data stolen. It is essential to implement good security measures which will be sufficient to repel attacks from all but the most skilled, motivated, and determined individuals. There is no such thing as zero risk, but it is possible to manage risk and get it down to a minimal level.
The role of a systems administrator is certainly challenging, mainly because it is constantly changing. That has been the case ever since the role was first defined. If you were to write down the role of a systems administrator today, the description would be out of date before the ink had dried.
The role of a systems administrator evolves quickly. That is the very nature of the job. For many sys admins, that is what makes the job so interesting and enjoyable.
Anyone contemplating entering the profession should not be afraid of hard work. They also need to know that they will need a lot of training, and even more experience, in order to excel in the position.
The role of a systems administrator over the next five years
Over the course of the next five years, 12% growth is expected for systems and network administrators, according to the US Bureau of Labor Statistics. The last report issued by the BLS indicated a much higher growth rate, but the figure has now been adjusted and matches the average across all industries tracked by the BLS.
In years gone by, you may have been able to get away with just an MCSA qualification to become a good systems administrator. Today, that is not nearly enough. Not only will you need to know your way around Microsoft products, you will also need to become an expert in every system used by your employer.
To excel in the role of a systems administrator you must be technically gifted, and you will need to be something of a jack of all trades. New technology is frequently introduced and part of the role of a systems administrator is to get to grips with that technology quickly. After all, you will be required to configure it, troubleshoot it, and repair it as necessary. The role of the systems administrator has grown enormously since IT has become so pervasive in business.
Fortunately, it is much easier to access training and information resources than ever before. Vendor websites provide a wealth of information, Udemy and other online learning resources can easily be accessed, and social media networks and online forums allow a sys admin to tap into the knowledge of colleagues and other sys admins when help is required.
How important is certification?
You will need an MCSA certificate to get your first job, but in order to retain your position, or to progress and get a better paid job, further qualifications may be required – but not necessarily. Qualifications look great on a CV and can impress potential employers, but experience really does count. If you know your stuff and have experience, it does make sense to get certified, but never underestimate the value of experience over a piece of paper. Certification is not everything.
If you want to take on the role of a systems administrator be sure to learn these technologies!
A systems administrator should be familiar with emerging technologies, but there are some tech trends that are an absolute must to become familiar with. These include:
- Cloud services
- Voice Over IP (VoIP)
- Technologies that can automate tasks performed by a sys admin
Automation of daily sys admin tasks
Automation of sys admin tasks does not mean you will ultimately be made redundant. It means you can use your time more efficiently. You will need to be familiar with the tools that allow you to automate a lot of tasks. They are essential for managing large, complex networks.
Without any automation of daily tasks, the role of a systems administrator would be an absolute nightmare. Imagine trying to keep track of system messages for a network with 1,000 connected devices if you did not have a centralized logging system!
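To illustrate why centralized logging matters, consider the aggregation side of the job: collect messages from every device in one place and summarize them by severity, so 1,000 devices produce one report instead of 1,000 consoles to watch. A toy sketch follows; the log line format is an assumption for illustration, not any particular product's:

```python
from collections import Counter

def summarize(log_lines):
    """Count syslog-style messages per severity across all devices.
    Assumed line format: '<device> <severity> <message>'."""
    counts = Counter()
    for line in log_lines:
        _, severity, _ = line.split(" ", 2)
        counts[severity] += 1
    return counts

# A handful of hypothetical messages from different devices.
logs = [
    "router-01 ERROR interface GigabitEthernet0/1 down",
    "switch-07 INFO port 12 link up",
    "router-01 ERROR BGP neighbor 10.0.0.2 lost",
    "fw-02 WARN connection table 90% full",
]
print(summarize(logs))  # Counter({'ERROR': 2, 'INFO': 1, 'WARN': 1})
```

A real deployment would feed this from a syslog collector rather than a Python list, but the principle – one searchable, filterable stream – is the same.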
While automation is vital, it is not without its problems. Automation can make the management of a computer network easier, but on a day to day basis your job is likely to be much more complicated, especially when it comes to troubleshooting problems.
Let’s say you have a red X showing on your management dashboard. What does that red X mean? Well, it could mean any number of things. For instance:
There could be a problem with the device hosting the dashboard, or it could be caused by a routing error. It could be a cable issue, or a problem with the device itself. It may be an error with the discovery protocol, or maybe the network dashboard is faulty. Automation may save time, but it doesn’t necessarily mean it is always quicker and easier to resolve problems. It also requires a sys admin to undergo further training on the automation system itself and the equipment used to host it.
In order to automate tasks you will need to learn a scripting language such as Python or Windows PowerShell. One thing is for sure: if you are planning on becoming a sys admin, you will need to learn at least one scripting language before you get your first job. The others can be learned on the job.
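As an example of the kind of routine task worth scripting, the sketch below checks disk usage across a list of mount points and flags any that cross a threshold. The 80% default is an arbitrary choice for illustration:

```python
import shutil

def check_disk_usage(paths, threshold=0.80):
    """Return (path, used_fraction) for each path whose disk usage
    exceeds the threshold (expressed as a fraction, 0.0-1.0)."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        used_fraction = usage.used / usage.total
        if used_fraction > threshold:
            alerts.append((path, used_fraction))
    return alerts

# Check the root filesystem; on a real network this list would cover
# every monitored mount point.
for path, frac in check_disk_usage(["/"]):
    print(f"WARNING: {path} is {frac:.0%} full")
```

Run from cron or a task scheduler, a script like this replaces a daily manual check across every server.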
Use of SaaS and the Cloud is Increasing
You must be familiar with cloud archiving and backups, as these have proven invaluable in improving efficiency. Many man-hours have been saved by using the cloud for routine data operations. However, there is now a need for sys admins to become familiar with APIs – Application Programming Interfaces.
With many companies now using outsourced cloud services, the sys admin’s role has become much more valuable. Without a sys admin, businesses would have no alternative but to believe what cloud service salespeople say. An experienced sys admin will be able to assess the services on offer and determine whether they have the functionality required to adequately serve the needs of the business.
The Two V’s – VoIP and Virtualization
Many companies are taking advantage of the huge cost savings possible by switching from traditional telephone services to VoIP. Unfortunately, while business leaders love the cost savings, users do not tolerate the potential downtime. They expect the 99.999% uptime they get with traditional telephony. It is therefore essential that sys admins understand network load dynamics and are able to successfully implement and maintain VoIP services.
Businesses nowadays use many virtual networks, which add new levels of abstraction. They also require advanced knowledge of switching and routing. It is therefore essential that a good working knowledge of virtualization is acquired.
The role of a system administrator requires these skills…
A study conducted by the Association for Information Systems (AIS) and Association for Computing Machinery (ACM), detailed in the IS 2010 Curriculum Guidelines, suggests an individual in the role of a systems administrator must have the following skills and attributes in order to succeed in the position:
- Creative, analytical, and critical thinking skills
- Excellent communication and negotiation skills
- Collaboration and leadership skills
- Good mathematical knowledge
Do you think you have what it takes? If you do, make sure you are aware of all the critical technologies. Work on your mathematical and communication skills, and make sure you expand your social network. Many companies are looking for experience, which can make it hard to get your first position. Hang in there. If you can prove your knowledge and demonstrate your skills, you should be able to get your first position. And we wish you the very best of luck with that.
Many people use Microsoft Exchange for archiving email, and some people do not archive email at all. Both are big mistakes. To find out why, it is important to know what true email archiving actually is.
What is email archiving?
Email archiving means more than just clearing your inbox. An email archive is a permanent and unalterable record of email data.
An email archive is essential for businesses; where a business is located, and the industry in which it operates, will determine just how important that archive is.
An email archive is required in case of litigation, and government audits will require emails to be retrieved from an archive.
It is important to make a distinction between an email archive and an email backup because the two terms are frequently confused. Both are important, but they are used in different situations.
An email backup is a store of emails that can be recovered in case of emergency. If email data is lost, corrupted, or accidentally deleted, a copy can be recovered from a backup. Email backups restore email accounts to the state they were in when the backup was made. Backups therefore need to be performed daily, as well as weekly and monthly. Each time a backup is made, it will usually overwrite a previous copy. Email backups are not permanent.
An email archive is different. It is a permanent store of email data. An archive is searchable, and individual emails can be retrieved as necessary.
Why is it important to have an email archive?
One of the main benefits of an email archive is to reduce the storage space required for individual mailboxes. Smaller mailboxes are faster to search and retrieve information. The mailbox should only contain a working copy of email from the last few days or weeks. The remaining emails should be moved to an archive where they can be retrieved as and when necessary.
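The move-old-mail-to-archive policy described above boils down to partitioning messages by age. A simplified sketch follows, using (date, id) pairs as a stand-in for real mailbox records; a production archiver would operate on the mail store itself:

```python
from datetime import datetime, timedelta

def partition_mailbox(messages, cutoff_days=30, now=None):
    """Split messages into (keep_in_mailbox, move_to_archive) by age.
    Each message is a (received_datetime, message_id) pair."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=cutoff_days)
    keep = [m for m in messages if m[0] >= cutoff]
    archive = [m for m in messages if m[0] < cutoff]
    return keep, archive

now = datetime(2016, 6, 1)
messages = [
    (datetime(2016, 5, 28), "msg-001"),   # recent: stays in the mailbox
    (datetime(2016, 1, 15), "msg-002"),   # old: goes to the archive
    (datetime(2015, 11, 2), "msg-003"),   # old: goes to the archive
]
keep, archive = partition_mailbox(messages, cutoff_days=30, now=now)
print([m[1] for m in keep])     # ['msg-001']
print([m[1] for m in archive])  # ['msg-002', 'msg-003']
```

The cutoff period is a policy decision; what matters is that the archived messages remain indexed and searchable, unlike messages sitting in a backup.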
Email archiving is a legal requirement in many countries around the world. It is necessary to maintain an email archive to comply with specific industry regulations, as well as country and state laws. An archive is also required for eDiscovery. If legal action is taken against a business, it must be possible for emails, and documents sent via email, to be retrieved. These must be provided during litigation.
eDiscovery can prove extremely expensive if an email archiving solution is not used. If documents or emails are requested they can be obtained from an archive. If they need to be obtained from individual computers, the time required to locate the emails would be considerable. You may even need to search every computer in your organization. If you run a small business and have 20 computers and email accounts, this would take quite a while. If you run a business with 10,000 computers and email accounts, you could be in real trouble if you don’t have an email archive.
eDiscovery requirements mean an email archive must be searchable, so the organization of the archive is critical. How so? That is best illustrated with an example. A criminal case involving Nortel Networks executives resulted in 23 million pages of electronic email records being delivered by the prosecution. That is a lot of data. Unfortunately, the data was in a mess because it had not been well organized – so much of a mess that Ontario Superior Court Justice Cary Boswell ordered the prosecution to re-present it to the defense in a comprehensible format. It was described as an “unsearchable morass.”
Organizing 23 million pages of email takes a considerable amount of time. It is therefore important to get the structure of the archive correct from the outset.
Can I use Microsoft Exchange for archiving email?
Is it possible to use Microsoft Exchange for archiving email? Since the 2007 version was released, Microsoft has included the option to use Exchange for archiving email via its journaling and personal archive functions.
However, there is a problem with using Exchange for archiving email. The journaling function does not work as a true email archive. Using Exchange for archiving email can cause many problems.
Reasons why Exchange for archiving email can cause problems for businesses
- MS Exchange does not allow email in its archive to be effectively indexed and searched
- Individual email account holders can create personal PSTs and store email on their computers
- Individual PSTs may not meet the requirements of eDiscovery
- There are no data retention configuration settings in journaling
The journaling function doesn’t really satisfy the requirements of businesses, but what about the Personal Archive? Can that be used? Unfortunately, while it does offer some enhanced email archiving functionality, using the Personal Archive in Exchange for archiving email will also cause problems.
Let us take a look at the functionality of the personal email archive in the 2010 release. Exchange 2010 is better for email archiving than the 2007 release, but there are still some major issues.
In Exchange 2010, it is possible to create a mailbox archive for each email account. The purpose of the archive is to free up space in the mailbox. This is a workaround for restrictive mailbox quotas. The archive is intended to be used as a medium-term store for additional emails that the user does not want to delete but does not need in the mailbox for day-to-day operations. Personal archives are not really email archives, but secondary mailboxes. They lack the functionality of a true email archive.
Exchange users have two options for their personal archive, regardless of whether it is located in the production database or in the cloud. The archive can be configured to move messages automatically after a set period of time (based on retention tags) or the task can be performed manually as and when required.
There are two main drawbacks to using an Exchange personal archive. For many organizations the main disadvantage is the cost: it is necessary to purchase an enterprise client access license (CAL), or to purchase Office 2010 Professional Plus if Outlook is required.
Even Microsoft points out that it may not be wise to use personal archives in Exchange for archiving email, stating they “may not meet your archiving needs.” Does that seem an odd statement to make? That is because a personal archive is not a true email archive. It is a personal one.
Users are able to choose what information is loaded into the personal archive. They can also delete emails from the archive. That is no good for regulatory compliance and eDiscovery. There is a workaround, though: it is possible to meet certain eDiscovery and regulatory compliance requirements when using Exchange for archiving email. Users can be given Discovery Management roles and can perform indexing and multiple-mailbox searches. Unfortunately, the Control Panel in Exchange 2010 is difficult to use, especially for eDiscovery purposes.
Some of these issues have been addressed in Exchange 2013, but there are still eDiscovery issues. Users have far too much control over their personal archives and mailboxes. They have the ability to create their own policies and apply personal settings to their mailboxes and archives. They can potentially bypass corporate email storage policies. Unfortunately, unless Litigation Hold or In-Place Hold is applied to each and every mailbox, the administrator cannot override the settings applied by individual users.
Is it possible to use Microsoft Exchange for archiving email if SharePoint 2013 is used?
The issue of eDiscovery has been tackled by Microsoft. It is possible to use SharePoint 2013 to perform searches of all mailboxes, but there are even problems with this added eDiscovery feature.
For a start, it is necessary to buy SharePoint 2013, and that has a cost implication. It is also necessary to keep the data on an Exchange server or in cloud storage; otherwise the In-Place eDiscovery tools of Exchange will not work.
There is another issue. That is the storage space you will require. Every email that has ever been sent or received through MS Exchange will need to be stored. Over time your email “archive” will become immense. Over 90% of the emails stored in that archive will never need to be accessed. It will involve paying an unnecessary cost and searching through all those emails will take a long time. Recovering emails will be particularly slow.
A true archive will remove a significant proportion of the 90% of emails that you will never need to access, and search and recovery time can be greatly reduced.
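The difference can be sketched with rough arithmetic. The message volume and the share of never-needed mail that a true archive eliminates (through deduplication and retention rules) are assumed figures for illustration; only the "over 90% never accessed" share comes from this article.

```python
# Rough illustration of why a true archive shrinks the search space.
# total_emails and removed_by_archive are assumptions for the example;
# the 90% never-accessed share is the figure cited in the article.

total_emails = 1_000_000        # assumed messages ever sent/received
never_accessed = 0.90           # share that will never need to be accessed
removed_by_archive = 0.60       # assumed share of those a true archive
                                # eliminates (deduplication, retention rules)

kept_by_exchange = total_emails  # Exchange keeps everything
kept_by_archive = total_emails - int(
    total_emails * never_accessed * removed_by_archive
)

print(kept_by_exchange, kept_by_archive)  # 1000000 460000
```

Under these assumed numbers the searchable store shrinks by more than half, which is why search and recovery times fall accordingly.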
You cannot consider the archiving function of MS Exchange to be a true email archive that will meet all compliance and eDiscovery needs.
The ArcTitan approach to email archiving
ArcTitan is a true email archiving solution that has been custom designed to meet compliance and eDiscovery requirements, as well as meeting data storage needs.
Key Features of ArcTitan Email Archiving
Ireland is famous for many things, but cybersecurity technology would not come top of many people's lists of famous Irish exports. However, that is fast changing thanks to an Irish cybersecurity firm called SpamTitan Technologies.
Irish CyberSecurity Company Ranks in Cybersecurity Ventures’ Top 125
SpamTitan Technologies is the top Irish cybersecurity firm according to the recent “Cybersecurity 500” list produced by Californian Security Research organization, Cybersecurity Ventures, having been ranked in position 123 out of the top 500 firms.
Cybersecurity Ventures compiled the list of the world’s top internet, email, and network security firms to help companies of all sizes pick the most appropriate IT security partners. The CV top 500 list is aimed at IT security professionals, CISOs, CIOs and VCs, and helps them to find the best products and partners to keep their confidential data secure and their networks protected from attack.
No company pays to be included in the list, and the companies are not selected on size or revenue. Instead, they are chosen based on the quality of the products and services offered. The list is compiled by obtaining recommendations from security experts on the efficiency, effectiveness, speed, ease of implementation, and usability of the products.
Galway-based SpamTitan Technologies is an up and coming Irish cybersecurity firm that specializes in developing powerful solutions that allow small to medium sized enterprises to tackle the growing problem of hacking, data theft, and sabotage. Online criminals are targeting corporations of all sizes and many small to medium sized businesses are struggling to repel attacks. There are many possible attack vectors and the threat landscape is constantly changing, but some of the biggest threats to data and network security are targeting employees. Workers are widely regarded as the weakest link in the security chain.
SpamTitan Technologies provides powerful, cost-effective, and easy to implement email and Internet security solutions that help businesses increase protections against malicious outsiders. The company’s products help businesses reduce the risk of data breaches and network infiltration by keeping employees’ devices protected and reducing the opportunities given to cybercriminals to launch an attack.
Over the past couple of years there has been a decline in the volume of spam emails being sent. Just a few years ago over 70% of the total number of emails sent were actually spam. Botnets have recently been taken down and one of the world’s most active spammers has been arrested. This year spam email accounted for just under 50% of total email volume.
This is certainly good news. Less time is spent dealing with annoying emails. However, the risk of harm to equipment and finances does not appear to be reducing at the same rate. In fact, the risk of suffering losses due to the activities of cybercriminals is increasing. Spam email volume may be decreasing, but the quality and sophistication of spam email attacks has increased. Spam email still represents a major threat to businesses.
SpamTitan Technologies is tackling the issue. The company’s Anti-Spam solutions use two powerful anti-virus engines to scan incoming and outgoing email, with independent tests showing a catch rate of 99.7%, while the false positive rate is virtually zero. Less spam is delivered to employees’ inboxes, reducing the risk of malware and viruses being delivered.
The Irish cybersecurity firm also offers protection from the growing online phishing threat. Spam email volume is falling, but the number of malicious websites being created is increasing. Online criminals are switching their mode of attack and are targeting Internet and social media users. SpamTitan Technologies’ WebTitan web filtering solution offers protection from phishing websites and sites containing malicious code. Phishing attempts are blocked, users are prevented from visiting malicious websites, and their computers are kept free from malware. So are the networks those computers connect to.
There may not be many Irish cybersecurity firms in the list – just three in fact – but SpamTitan has been rated the hottest prospect and is the Irish cybersecurity company to watch in 2015. NetFort was also named in the list, with the Network Security monitoring company just creeping into the top 500 list at position 498. PixAlert, the IT governance and compliance firm, placed inside the top 350 global firms at position 332.
President Barack Obama is set to propose new US cybersecurity legislation this week in an effort to tackle the growing problem of cybercrime. Recent high profile hacks on government organizations have caused considerable embarrassment and there is growing concern that the US government is losing the war on cybercrime and that it can do little to prevent attacks from foreign-government backed hacking groups.
New US cybersecurity legislation will increase the government’s power to prosecute cybercriminals
New US cybersecurity legislation is seen as the answer to the government’s inability to prevent cyberattacks. The government needs better intelligence, new powers to pursue criminals, and the ability to take action over criminal activity that takes place outside U.S. borders.
Currently private companies are unwilling to share cyberthreat intel with the government, and improved collaboration and intel sharing with the private sector is seen as critical in the fight against cybercrime.
The proposed US cybersecurity legislation would make it much easier for the courts to take action to shut down criminal botnets and would discourage the sale of spyware. It would also expand the current Racketeer Influenced and Corrupt Organizations (RICO) Act. This would give the government greater power to prosecute individuals engaged in cybercriminal activity, such as the selling or renting of botnets. It would also increase the government’s power to prosecute for the selling of government information outside US geographical boundaries.
The new US cybersecurity legislation is being pushed through in the wake of a particularly embarrassing hack of the U.S. Central Command’s Twitter account. Hackers managed to gain access to the Twitter account and post pro-ISIS content. Action was already being planned following a host of major cybersecurity incidents such as the attack on Sony, which has been attributed to a hacking team backed by North Korea. The Twitter hack was the last straw for many, and will be used to help push through the new legislative package.
In the words of President Obama, the attacks “show how much more work we need to do, both public and private sector, to strengthen our cybersecurity.”
US cybersecurity legislation to offer private companies targeted liability protection
Private companies will be forced to share their cyberthreat intelligence with the government, although they will receive “targeted liability protection.” Even President Obama admitted to not knowing exactly what that meant.
The problem with sharing intelligence data is the threat of subsequent lawsuits. The liability protection is supposed to relieve any fears of legal action for the disclosure of information, although private companies may require more convincing.
Under the current proposals, private companies would be permitted to remove information about individuals before sharing data. Previous attempts to introduce new US cybersecurity legislation have failed due to the unwillingness of private companies to leave themselves wide open to litigation.
Part of the new legislative package is likely to include a new data breach notification law that would require all organizations to report hacking incidents to the government as well as requiring them to provide further information about cybersecurity breaches and data theft to consumers.
While few would argue that new US cybersecurity legislation is required, many privacy proponents are uncomfortable with the wording being used in the proposed legislative package, which they claim is intentionally vague.
It is not only firms in the financial services, education, and healthcare industries that need to be aware of business data retention laws. All companies in the United States must comply with business data retention laws, even if a firm is not covered under HIPAA, Gramm-Leach-Bliley, Dodd-Frank, or SOX. The same applies to companies with a European base. The EU also has business data retention laws.
It is a crime to violate business data retention laws
Did you know that the simple act of permanently deleting an email could get you in hot water? If you delete the contents of a backup tape, or reuse the wrong one, you may even be looking at a spell in jail. How long? Up to 20 years if you do it knowingly, with malicious intent. The deletion of data is a serious crime. If a business operating in the financial sector is audited and cannot show auditors certain emails, the SEC (Securities and Exchange Commission) is likely to issue a heavy fine.
The laws covering data are complex. Different regulations call for different data retention periods. Some states have implemented data retention laws with even stricter controls than federal regulations. Companies providing services to organizations in different business sectors may have to comply with different laws depending on the firm they are currently working with. As a precaution, many companies in the United States decide to keep data indefinitely. Getting something wrong is too easy, and the risk of doing so too high.
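The precautionary approach amounts to always applying the strictest rule. A minimal sketch of that logic in Python, using only retention periods stated in this article (HIPAA's six-year minimum, SOX's seven years, and the UK and German periods discussed below) – real compliance decisions involve state laws and legal counsel, not a lookup table:

```python
# Minimum retention periods in years, as stated in this article.
# Illustrative only: state laws may impose longer periods.
RETENTION_YEARS = {
    "HIPAA": 6,             # minimum; some states require 7 or more
    "SOX": 7,               # financial data and related email
    "UK_ACCOUNTS": 3,       # UK records of accounts
    "UK_FINANCIAL": 6,      # UK financial services firms
    "GERMANY_COMMS": 6,     # German business communication data
    "GERMANY_PAYROLL": 10,  # German accounts and payroll data
}

def required_retention(applicable_regs):
    """Apply the strictest rule: the longest applicable period wins."""
    return max(RETENTION_YEARS[reg] for reg in applicable_regs)

# A US healthcare company that is also publicly traded:
print(required_retention(["HIPAA", "SOX"]))  # 7
```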
All data must be backed up and stored off site. The backups must be physically secured, and should be encrypted. They must also be tamper-proof. In the event of an emergency, it must be possible to restore data in its entirety, and information may need to be retrieved if a lawsuit is filed or if an audit takes place.
Not sure which data retention laws apply? Listed below is a brief summary. Please bear in mind that data retention laws are updated from time to time. At the time of publishing, the information contained in this article is up to date.
HIPAA – The Health Insurance Portability and Accountability Act (1996)
The Health Insurance Portability and Accountability Act (HIPAA) was passed in 1996 and covers healthcare providers, healthcare clearinghouses, health insurers and business associates of HIPAA-covered entities. The legislation was signed into law by Bill Clinton, and initially was intended to protect Americans and keep them covered with health insurance in the event of job loss.
Since its introduction, the legislation has been updated with stricter requirements concerning data privacy and security, and the safeguards that must be implemented to ensure that Protected Health Information (PHI) is secured at all times. Rules were introduced to protect the privacy of patients and dictate when, and to whom, data can be disclosed. HIPAA also stipulates the actions that must be taken if data is accidentally exposed. HIPAA requires medical data to be retained for a minimum of six years after the last date of treatment. However, some states require data to be kept for 7 years or longer. HIPAA is only a minimum standard. States are permitted to introduce even stricter business data retention laws.
SOX – Sarbanes-Oxley Act (2002)
The Sarbanes-Oxley Act of 2002 was introduced in the wake of the Enron scandal. Businesses must be able to verify the accuracy of their financial statements. It is all well and good for a company to report to investors and stakeholders that everything is financially in order, but they must be able to prove that is the case. In the case of Enron, the information provided was inaccurate. Deliberately inaccurate. SOX was introduced to protect investors from fraud.
Under SOX, all financial data must be retained for a minimum period of seven years, which extends to email, since email is often used to communicate account information. Email communications discussing business operations must also be retained for 7 years.
UK business data retention laws
In the UK, business data retention laws apply, although different time scales apply to different data types and formats. A UK business must keep records of accounts for 3 years, although businesses in the financial services must keep data for six years. Emails must be kept for a year, as must text messages. If you are an Internet Service Provider (ISP) you must keep logs of Internet connection data for a period of a year, and ISPs and web hosts must keep records of the websites their customers have visited for a period of four days.
European business data retention laws
In Germany, all business communication data must be retained for a period of six years, although data relating to accounts and payroll must be kept for a decade. Different laws apply throughout Europe and are beyond the scope of this post. If you want to find out more about the different business data retention laws in Europe, take a look at the guide produced by Iron Mountain on this link.
Convenient solutions for archiving old email data
Data backups should be performed on a daily basis, and those backup tapes stored securely off site for the period of time dictated by industry regulations. Email is best stored in an archive. Archives are searchable and convenient. If an email is accidentally deleted and needs to be recovered, an email archive will allow this. It is far easier restoring an email from an archive than restoring an entire email account from a backup tape.
ArcTitan is a convenient and cost-effective solution for archiving old emails. ArcTitan features a natural language browser that allows searches to be performed, and individual emails can be rapidly located and restored. If you want to ensure compliance with business data retention laws, and have the flexibility to be able to retrieve old email data for audits (and when users accidentally delete important emails), ArcTitan is the answer.
All computer users are at risk of downloading malware or computer viruses. The malicious software is sent out in bulk mail, and everyone will receive an infected email attachment or a link to a malicious website at some point – often on a daily basis. However, individuals are not typically targeted by cybercriminals. Attacks on individuals are usually random. Businesses, on the other hand, are being targeted, and there have been an increasing number of cyberattacks on universities and other higher education institutions in recent months.
Successful cyberattacks on universities can allow criminals to steal highly valuable data. Those data can be sold on the black market to identity thieves and fraudsters for big money.
Cyberattacks on universities and educational institutions are a growing cause for concern
The reasons for the cyberattacks on universities are: A) Universities store a lot of student data; B) They often store Social Security numbers, which are very valuable to identity thieves; C) They use tools to facilitate collaboration, which makes attacks easier to pull off; and D) Students and professors tend to use a much wider range of software than a typical business – the more software systems in use, the higher the risk of exploitable vulnerabilities.
After a number of successful cyberattacks on universities, higher education institutions have been forced to improve defenses. They have had to re-evaluate the way they are configuring their networks and implement new policies covering Internet usage and data security.
One of the main problems is the range of software used by universities and the tools that must be offered to students to allow them to learn, collaborate, and conduct research. University networks are also highly complicated and particularly difficult to manage. It is therefore easy for security vulnerabilities to be missed.
This year, major attacks have been suffered by a number of universities in the United States, and there are still 4 months left of the year. More will undoubtedly be suffered before the year is out.
One of the biggest was suffered by the University of Maryland in February. Hackers were able to steal the data of 300,000 individuals, including their full names, dates of birth, and Social Security numbers: The three data elements that are required to commit identity theft with ease.
A data breach of a similar scale was suffered by North Dakota University. In this case, hackers gained access to a server in October, 2013, although it took four months for the data breach to be discovered. Approximately 290,000 records were obtained by a hacker in that cyberattack.
How are cyberattacks on universities conducted?
News that hackers are increasingly targeting universities is no surprise. Cyberattacks on universities have been occurring for years. In the majority of cases, those attacks are thwarted, but cybercriminals are getting sneaky and a lot better at sidestepping security defenses. Many attacks are now starting with spear phishing campaigns. Individuals are researched and cunning schemes developed to convince them to open malware-infected email attachments or visit malicious websites that steal their login credentials.
The cyberattack on North Dakota University is understood to have involved a spear phishing element. Interestingly, three IT professionals were placed on administrative leave last month. They were part of the team responsible for Internet security. According to an internal investigation, the employees “didn’t think server security was part of their job.” IT managers take note!
The cost of mitigating risk after cyberattacks on universities is considerable
The Ponemon Institute has calculated the cost of cyberattacks on universities, and estimates the cost of mitigation following a successful attack to be $111 per record. Why is the cost so high? Teams of forensic investigators have to analyze servers and entire networks to determine which data were accessed and who has been affected. The investigations are painstaking and take weeks to conduct.
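Putting the Ponemon figure together with the University of Maryland breach described above gives a sense of the scale. Both numbers come from this article; the multiplication is a back-of-the-envelope estimate, not a reported total.

```python
# Back-of-the-envelope mitigation cost: the Ponemon Institute's
# $111-per-record estimate applied to the 300,000 records exposed
# in the University of Maryland breach (both figures cited above).

cost_per_record = 111
records_exposed = 300_000

mitigation_cost = cost_per_record * records_exposed
print(f"${mitigation_cost:,}")  # $33,300,000
```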
Since Social Security numbers and other highly sensitive data are obtained in many of the attacks, credit monitoring services must be offered to the victims, along with identity theft resolution services. All individuals must be mailed a breach notification letter. The cost of mailing the letters alone can be considerable. Then there are class-action lawsuits filed by the breach victims. They often seek $1,000 per head in damages.
The Maricopa County Community College District data breach was estimated to have cost $17.1 million, and that doesn’t include the cost of class-action lawsuits. The University of Maryland data breach will similarly cost millions to resolve. Then there is the damage caused to a university’s reputation. It is difficult to determine what effect such a massive data breach will have in that regard.
Considering the cost of resolution, it is perhaps understandable that cyberattacks on universities are not always made public. Some security experts estimate that only half of successful attacks are actually reported.
When you consider the astronomical cost of data breach resolution, the cost of implementing cybersecurity defenses does not seem so high.
Encrypted credit cards? Don’t they already exist?
Encrypted credit cards have been around for a long time now – or, at least, credit cards with a limited amount of encryption. The magnetic strip on the back of each credit card is encrypted, and so is some of the data in the more recent chip-and-PIN cards, but basically the security offered by most encrypted credit cards is, well, basic.
When you go shopping in a store like, let’s say, Target, the retailer provides an electronic terminal for you to scan “encrypted” credit cards. The terminal sends your card’s identifying data to the credit card company’s servers to verify that you have the funds to pay for your purchases.
Although the electronic transfer of information is encrypted in transit and at rest, there is a weak point in the process during which the data is decrypted into clear text so that it can be read by the payment processing software. In Target’s case, the weak point was located in the point of sale (POS) electronic terminal.
The Target hack was on a massive scale
Hackers used the weak spot in Target’s POS electronic terminals to steal the details of 110 million credit and debit cards. Not only were the credit card numbers taken, but also PINs and the card holders’ addresses, email addresses, and phone numbers – suggesting that Target’s customer database was also hacked (because encrypted credit cards do not have your email address on the magnetic strip).
Initially the retail giant tried to cover up the hack, but as shoppers started reporting unauthorized purchases on their credit accounts, Target had to come clean and admit to the data breach. As a result, the lawsuits are flying in, Congress has called the company negligent, and attorneys general in every state in the country are looking into the matter.
The damage to Target – both financially and in terms of lost reputation – will run to billions of dollars.
Yet the hack could have been worthless
Had the retail industry adopted properly secure encrypted credit cards, the hack of Target’s database would have been worthless. Properly secure encrypted credit cards work not by storing the credit card number and PIN on the magnetic strip, but by storing a random encrypted number and a public key.
When a purchase is made at a store like, let’s say, Target, the retailer does not need the credit card number or PIN, just an authorization code so that the card can be charged. So, when the credit card is used, the random encrypted number and the card holder’s public key are transmitted to the credit card issuer. The credit card issuer sends back an authorization code encrypted so that only the card – which holds the matching private key – is able to read it.
This “PKI encryption” at the point of sale would mean that any hacked credit card details would be worthless to the hacker. It would cost billions of dollars to introduce a system for properly secure encrypted credit cards to be used in the retail industry, and there seems to be no consensus between banks, retailers, and credit card issuers on what standards should be used.
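The principle can be illustrated with textbook RSA. The tiny key below (primes 61 and 53) is hopelessly insecure and exists only to show why a hacked database would be worthless: the issuer encrypts the authorization code with the card’s public key, and only the card, holding the private key, can recover it – an eavesdropper at the POS terminal sees only ciphertext.

```python
# Toy illustration of the PKI scheme described above, using textbook
# RSA with tiny primes. For illustration only - never for real payments.

p, q = 61, 53
n = p * q                # public modulus (3233)
e = 17                   # public exponent; (n, e) is the card's public key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent, known only to the card

def issuer_encrypt(auth_code):
    """Issuer encrypts the authorization code with the card's public key."""
    return pow(auth_code, e, n)

def card_decrypt(ciphertext):
    """Only the card, which holds d, can recover the authorization code."""
    return pow(ciphertext, d, n)

auth_code = 65                      # a one-time authorization code
ciphertext = issuer_encrypt(auth_code)
print(ciphertext != auth_code)      # True: the POS sees only ciphertext
print(card_decrypt(ciphertext))     # 65: the card recovers the code
```

Because no secret ever crosses the POS terminal in clear text, stealing the stored public keys and random numbers yields nothing a fraudster can charge against.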
Google already making strides towards genuinely secure payments
Google has already addressed the problem of genuinely secure payments with the introduction of its Digital Wallet. The Digital Wallet works by isolating credit and debit card data and processing it outside of the Android operating system in a chip called the Secure Element (SE).
Google’s plan of keeping credit card data out of the reach of malware running in the operating system has not really taken off, however. Many companies are in a battle to come out on top in the lucrative market for credit card fees. Because of this lack of consensus, few manufacturers are adding the SE chip to mobile devices or the near-field communication chips needed to radio encrypted data from encrypted credit cards to the POS terminals.
Because of the lack of consensus between banks, retailers and credit card issuers, and a lack of knowledge about which way encrypted credit cards are headed – if at all – many more retail companies are likely to experience a similar attack to that witnessed by Target.