Cybercrime is global, but the response isn’t. Governments in the west are slowly waking up to the importance of cybersecurity, and are (equally slowly) helping businesses safeguard their data and home users protect their devices from cyberattack.
Look outside Europe and the US, though, and the picture is radically different. African countries, in particular, are underprepared for the impact of cyberattacks, and lack the governmental expertise to deal with them.
This is an issue for citizens of these countries, but also for us in the west. Poorly prepared countries act as safe havens for cybercriminals, and hackers (some of them state-sponsored) can use these countries to stage cyberattacks that directly impact users in the west.
Cybercrime: a global view
Though you wouldn’t know it from the press coverage, large cyberattacks don’t just affect the west.
Africa, for instance, actually has a huge problem with cybercrime. Recent reports from Botswana, Zimbabwe and Mozambique show that companies are increasingly falling victim to cybercrime. The global WannaCry malware attack of May 2017 hit South Africa hard, and companies in that country typically lose R36 million when they fall victim to an attack.
This situation is mirrored across the global south. It is made worse by the fact that developing nations do not have governmental policies for dealing with cyberattacks. This makes companies and home users in these countries particularly vulnerable. It also means that hackers can route their activities through these countries, which have neither the technical nor the legal expertise to catch them, let alone punish them.
Though government policies on cybercrime vary widely across the globe, many of the largest attacks of recent years rely for their success on their global reach. The Mirai Botnet, for instance, managed to infect IoT devices across a huge range of territories and countries, and this global base made it incredibly difficult to stop. Attacks like this have made the IoT one of the largest concerns among security professionals today.
Given this context, it is time for governments – in all countries and at all levels – to do more when it comes to managing cyber risk.
The approach that governments take to dealing with cyber risk is a critical factor in the success of these programs. Too often, governments take a ‘hands off’ approach, issuing advice to citizens and businesses about how to avoid falling victim to an attack, and then expecting them to protect themselves.
Every week the AT&T Chief Security Office produces a series called ThreatTraq with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them; you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. The video features Jonathan Gonzalez, Principal Technology Security, AT&T; John Hogoboom, Lead Technology Security, AT&T; and Tony Tortorici, Principal Technology Security, AT&T.
Jonathan: There's no such thing as an entry-level job in cybersecurity.
Tony: Jonathan, you had a story about entry-level jobs and what skills you need for day one. Do you want to go into it?
This ties to the “skill gap” notion in cybersecurity.
Miessler has other articles about the skill gap. In this article in particular, he seems to be indicating that there is really no entry-level position in cybersecurity, because cybersecurity is not a single field.
Jonathan: There is this cybersecurity domain mapping that I found very interesting that breaks down every possible job that you could end up in cybersecurity and it's overwhelming. Right? So someone in this entry-level world says, "I want to do cybersecurity." The first thing they need to figure out is what area of cybersecurity?
John: This is interesting. I'm not even on this list. I don't see any incident response.
Jonathan: There is, on the bottom left, security operations and incident response, investigations...
John: Oh there it is, okay. Security operations.
Jonathan: ...forensics is my team, there's awareness, there's user education. Also, internally we have governance and risk assessment. We have career development, we have security architecture. As a person in this entry-level world, what you need to understand is you're not doing cybersecurity. You're doing something within the field of cybersecurity. And in this article particularly, some scenarios can be built around the tasks that are expected. I'm gonna pick on auditing. One thing I learned on the job was preparing for an audit.
John: Everyone's favorite task.
Jonathan: Right. But usually, a junior entry-level person might end up on that team. And they need to understand what it means to do that, and as a person hiring, that might be the thing that you want them to understand. And if they don't even know what that is, then you're immediately going to eliminate them without considering their skills. They've just never done an audit. And I think what we get to in here is that it's not about the skill to do the audit, it's about the skills underneath, whether you might be able to bring them up to speed on auditing.
Jonathan: And this is very interesting because it's things like understanding which kind of audit it is. Right? Is it an app
These days it seems that every time you open your favorite news source there is another data breach related headline. Victimized companies of all sizes, cities, counties, and even government agencies have all been the subject of the “headline of shame” over the past several months or years. With all this publicity and the increasing awareness of the general public about how data breaches can impact their personal privacy and financial wellbeing, it is no surprise that there is a lot of interest in preventing hacking. The trouble is that there is no way to prevent others from attempting to hack into any target they choose. Since there is a practically limitless number of targets to choose from, the attacker need only be lucky or skilled enough to succeed once. In addition, the risk of successful prosecution of perpetrators remains low. However, while you can’t prevent hacking, you can reduce your attack surface to make your organization a less likely subject of attacks.
At this point, let’s differentiate between opportunistic attacks and targeted attacks. Opportunistic attacks are largely automated, low-complexity exploits against known vulnerable conditions and configurations. Ever wonder why a small business with a small geographic footprint and almost no online presence gets compromised? Chances are good they just had the right combination of issues that an automated attack bot was looking to exploit. These kinds of events can potentially end a small to medium business as a going concern while costing the attacker practically nothing.
Targeted attacks are a different story altogether. These attacks are generally low, slow, and persistent, targeting your organization’s technical footprint as well as your employees, partners, and supply chain. While targeted attacks may utilize some of the same exploitable conditions that opportunistic attacks use, they tend to be less automated in nature so as to avoid detection for as long as possible. In addition, they may involve more frequent use of previously unknown exploit vectors (“zero-days”) to reach their goals, or abuse trusted connections with third parties to gain access to your organization. Ultimately it doesn’t matter which of these kinds of attacks results in a breach event, but it is important to think of both when aligning your people, processes, and technology for maximum effect to mitigate that risk.
There have been many articles written regarding best practices for minimizing the risk of a cyber-security incident. Rather than recount a list of commonly cited controls, I would like to approach the topic from a slightly different perspective and focus on the top six technical controls that I feel are likely to help mitigate the most risk, provided that all the “table stakes” items are in place (i.e. you have a firewall, etc.).
Patch and Update Constantly: Ultimately the most hacker-resistant environment is the one that is best administered. Organizations are shortcutting system and network administration activities through budget and staff reductions and lack of training. This practice often forces prioritization and choices about which tasks get done sooner, later, or at all. Over time this creates a large, persistent baseline of low to medium risk issues in the environment that can contribute to a wildfire event under the right conditions. Lack
This spring, as the product and security operations teams at AT&T Cybersecurity prepared for the launch of our Managed Threat Detection and Response service, it became obvious to us that the market has many different understandings of what “response” could (and should) mean when evaluating an MDR solution. Customers typically want to know: What incident response capabilities does the underlying technology platform enable? How does the provider’s Security Operations Center team (SOC) use these capabilities to perform incident response, and, more importantly, how and when does the SOC team involve the customer's in-house security resources appropriately? Finally, how do these activities affect the return on investment expected from purchasing the service? However, in our review of the marketing literature of other MDR services, we saw a gap. All too often, providers do not provide sufficient detail and depth within their materials to help customers understand and contextualize this crucial component of their offering.
Now that we’ve introduced our own MDR solution, we wanted to take a step back and provide our definition of “response” for AT&T Managed Threat Detection and Response.
Luckily, Gartner provides an excellent framework to help us organize our walk-through. When evaluating an MDR service, a potential customer should be able to quickly understand how SOC analysts, in well-defined collaboration with a customer’s security teams, will:
Validate potential incidents
Assemble the appropriate context
Investigate as much as is feasible about the scope and severity given the information and tools available
Provide actionable advice and context about the threat
Initiate actions to remotely disrupt and contain threats
*Source: Gartner Market Guide for Managed Detection and Response Services, Gartner. June 2018.
Validation, context building, and investigation (Steps 1-3)
It’s worth noting that “response” starts as soon as an analyst detects a potential threat in a customer’s environment. It stands to reason then that the quality of threat intelligence used by a security team directly impacts the effectiveness of incident response operations. The less time analysts spend verifying defenses are up to date, chasing false positives, researching a specific threat, looking for additional details within a customer's environment(s), etc., the quicker they can move onto the next stage of the incident response lifecycle. AT&T Managed Threat Detection and Response is fueled with continuously updated threat intelligence from AT&T Alien Labs, the threat intelligence unit of AT&T Cybersecurity. AT&T Alien Labs includes a global team of threat researchers and data scientists who, combined with proprietary technology in analytics and machine learning, analyze one of the largest and most diverse collections of threat data in the world. This team has unrivaled visibility into the AT&T IP backbone, global USM sensor network, Open Threat Exchange (OTX), and other sources, allowing them to have a deep understanding of the latest tactics, techniques and procedures of our adversaries.
Every day, they produce timely threat intelligence that is integrated directly into the USM platform in the form of correlation rules and behavioral detections to automate threat detection. These updates enable our customers to detect emergent and evolving threats by raising alarms for analyzed activity within public cloud environments, on-premises networks, and endpoints. Every alarm is aut
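To make the idea of a correlation rule concrete, here is a deliberately simplified, hypothetical sketch (not AT&T's actual rule format, which is proprietary and far richer): a rule pairs a pattern over a stream of events with an alarm, such as flagging a burst of failed logins followed by a success from the same source.

```python
from collections import defaultdict

# Hypothetical, simplified correlation rule: raise an alarm when one
# source IP produces several failed logins followed by a successful one,
# a classic brute-force pattern. Real correlation rules add time windows,
# asset context, and threat intelligence, but the shape is similar.
FAILED_THRESHOLD = 3

def correlate(events):
    """events: iterable of dicts with 'src_ip' and 'outcome' keys."""
    failures = defaultdict(int)
    alarms = []
    for e in events:
        if e["outcome"] == "failure":
            failures[e["src_ip"]] += 1
        elif e["outcome"] == "success" and failures[e["src_ip"]] >= FAILED_THRESHOLD:
            alarms.append(f"possible brute force from {e['src_ip']}")
            failures[e["src_ip"]] = 0  # reset after raising the alarm
    return alarms

# Four failed logins, then a success, all from one (documentation-range) IP.
events = (
    [{"src_ip": "203.0.113.7", "outcome": "failure"}] * 4
    + [{"src_ip": "203.0.113.7", "outcome": "success"}]
)
print(correlate(events))  # ['possible brute force from 203.0.113.7']
```

The point of automating rules like this is exactly what the paragraph above describes: analysts start from a triaged alarm rather than raw event noise.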
This past June, I attended the 2019 Bitcoin Conference in San Francisco, CA. With the various discussions on Bitcoin, Cryptocurrency, and with the chance to hang out with my favorite Crypto personalities, it was easy to lose myself in all the festivities.
While taking a break, I found a seat and decided to charge my iPhone. The charging station where I was seated was a wooden cube with two standard wall sockets and two USB ports. Other users had taken the wall sockets, but I knew that I could charge my phone via USB.
But before I did, I remembered that on the trip up to San Francisco, one of my travel companions, who was with a startup known as CoinCards, had passed out what they called a “USB data blocker” USB adapter.
So, what is a USB data blocker?
Chargers for modern cellphones, in my case an iPhone Lightning charger, serve dual purposes: 1. They charge your phone, and 2. They allow for the transfer of data.
Why is this important to understand?
So, take the charging cube from the conference. Consider that a hacker had built the cube around a device, say a Raspberry Pi, and the USB ports visible from the outside were actually the USB ports of the Pi (or of a USB hub connected to the Pi).
Once my phone was plugged in, it could potentially expose me to whatever malware was on the Raspberry Pi. A USB data blocker stops the data-flow aspect of the charging cable and allows only the charging element.
Cybersecurity is no longer just a corporate issue; we have each become our own cybersecurity firm, responsible for protecting our own data.
Anti-virus and firewalls can only protect us so much; we have to do our due diligence when it comes to our safety online. Consider the computer housed behind a firewall. There can be some expectation of safety inside of the firewall, especially one that is monitored and updated.
But that firewall will not make a difference if someone brings in an infected USB device and then plugs that device into one of the company's computers. I know this from experience.
A client was confident that their firewall would protect them from cyber threats, to the point where they refused to purchase anti-virus for their computers. One day, an employee brought in a USB flash drive that they had used at home and plugged it into their work computer. It turns out a file on their home computer was infected with malware, and they brought it into the office. The employee copied data to the server so that others could access it, and the malware was able to spread, including to the server.
But how does this fit into our discussion on USB data blockers? If you take the phone aspect out of it, smart devices are computers. Smart devices access the internet, upload and download data, and generally use USB to charge or sync.
While iPhones are less likely to be the victims of malware than Android or Windows phones, we would be foolish to assume that a potential hacker could not use the Lightning charger to send malicious software to an iPhone.
Apple has recently offered a bounty to anyone who can hack the iPhone’s OS, which suggests this topic has made the rounds at Apple as well.
Cyber awareness, training, and education are more critical now than ever. We can no longer assume because we have a particular type of device that we are automatically safe from harm.
As technological developments have helped turn the world into a global village, they have also made it easier to steal, extract, and communicate confidential information – leading to an increased frequency of corporate espionage.
Take Apple for example; despite deploying leading security measures and monitoring activities, the tech giant has faced two espionage attempts in one year, each foiled just as the suspects were departing the country.
It’s not only the Silicon Valley giants who have to face espionage. Rather, smaller businesses have more to lose. With 31% of all cyber-espionage attacks aimed at small businesses, the loss of important information can leave them facing bankruptcy.
Indeed, according to the U.S. National Cyber Security Alliance, 60% of small and medium enterprises (SMEs) shut down within six months of a cyber-attack. What’s more, it costs such businesses between approximately $690,000 and $1 million to clean up after an attack.
As Jody Westby, CEO of Global Cyber Risk, says, “It is the data that makes a business attractive, not the size – especially if it is delicious data, such as lots of customer contact info, credit card data, health data, or valuable intellectual property.”
Why Are Small Businesses Targeted?
Smaller businesses are easy targets of corporate espionage, as they tend to have weaker security compared to large corporations.
The Internet Security Threat Report shows, for instance, that while 58% of small businesses show awareness and concern about a possible attack, 51% of them still have no budget allocated to prevent it.
It seems, also, that the problem is getting worse, as outlined by cyber-security experts in PwC’s Global State of Information Security Survey: small organizations, with annual revenue of under $100 million, have reduced their security budget by 20%, even as large organizations are spending 5% more on security.
Indeed, as large organizations are getting better at defending themselves against different types of espionage, criminals are “moving down the business food chain.” For example, cyber-attacks to steal information from small businesses have increased by 64% in a span of four years, as large businesses have adopted more robust security protocols.
Britain should be prepared for a Category 1 cyber security emergency, according to the National Cyber Security Centre (NCSC). Such an emergency would put national security, the economy, and even lives at risk. However, despite this harsh warning, UK businesses still aren’t taking proactive and potentially preventative action to stop these attacks from happening. So just where are UK businesses going wrong, and can they turn things around before it’s too late?
How businesses have responded
Since Brexit was announced in June 2016, 53% of UK businesses have increased their cyber security, according to the latest statistics. This is a direct result of published industry data revealing that malware, phishing, and ransomware attacks will become the biggest threats once Britain leaves the EU. However, despite these efforts, figures reveal that British businesses have the smallest cyber security budgets of any country surveyed: they typically spend less than £900,000, whereas the global average is $1.46 million.
At risk of a Category 1 cyber attack
A Category 1 cyber attack is described by the NCSC as “A cyber attack which causes sustained disruption of UK essential services or affects UK national security, leading to severe economic or social consequences or to loss of life.” To date, the UK has never witnessed such an attack, although one of the most severe attacks in recent times was the 2017 NHS cyber attack, which was classed as Category 2 because there was no imminent threat to life.
The NCSC says that it typically prevents 10 cyber attacks on a daily basis. However, as the organization believes that hostility from neighbouring nations is what drives these attacks every single day, it says it’s only a matter of time before a Category 1 attack launches the country into chaos. NCSC CEO Ciaran Martin states: "I remain in little doubt we will be tested to the full, as a centre, and as a nation, by a major incident at some point in the years ahead, what we would call a Category 1 attack."
Every year we survey visitors to our booth at Black Hat about trending topics. This year, we asked about ransomware and the ever-increasing complexity of our cybersecurity environment. The results are very interesting - things may be getting much better, or we may all be collectively in denial. Let's break it down.
We surveyed 145 IT security professionals. First, we wanted to check in with the industry on their experiences with ransomware. We started by asking how many have been the victim of a ransomware attack - it turns out nearly 17% had been. Sadly, this fairly large number didn't come as much of a surprise to us given the headlines we have seen in the media recently.
Of course, one of the most difficult decisions anyone will make in their IT security career is "should I pay to get my data back?" If ransomware has caught you off guard, your job or even the future of your company may be at stake. While rewarding criminal behavior may be a bad idea, when the stakes are high it can be difficult to take the high road. However, almost 58% of our respondents say they would pay.
This led to another question. Should it be illegal to pay the ransom? After all, if we allow ransomware criminals to achieve their goal, how will we ever stop them, and how will we incentivize companies to properly prepare themselves to thwart them? People were split on this question, with about 40% saying it should be illegal, and 60% saying that it should not be. Given this result, we probably won't see the IT community lobbying for new legislation in this area.
The most surprising result came when we asked if IT security professionals were ready for a ransomware attack. In case you're new to security: the only chance you have to mitigate ransomware is a solid security program that closes down as many attack vectors as possible with protection tools, and even then it is almost impossible for these controls to be 100% effective. The only way to recover from ransomware is to have complete backups of your systems, wipe them clean, and start over. Expert tip: make sure the backups aren't stored on your network, where they could be encrypted along with the rest of your data.
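The backup tip above can be sketched in a few lines. This is a minimal illustration, not a backup product: the paths are placeholders, and the essential part is that the destination stands in for removable or otherwise offline storage that gets disconnected once the snapshot is written.

```python
import tarfile
import tempfile
from datetime import date
from pathlib import Path

def offline_backup(src: str, dest_dir: str) -> Path:
    """Write a dated .tar.gz snapshot of src into dest_dir.

    dest_dir stands in for a removable drive or other offline target;
    the key point is that it must NOT stay mounted on the network,
    where ransomware could encrypt the backups too.
    """
    archive = Path(dest_dir) / f"backup-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=Path(src).name)
    return archive

# Demo with temporary directories standing in for real paths.
src = tempfile.mkdtemp()
dest = tempfile.mkdtemp()
Path(src, "important.txt").write_text("data worth keeping")
print(offline_backup(src, dest))  # e.g. .../backup-YYYYMMDD.tar.gz
```

In practice you would also verify the archive restores cleanly; an untested backup is only a hope.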
Surprisingly, a full 69% of our survey respondents claim that they are prepared for a ransomware attack. This is wonderful news. It's also pretty surprising, given everything we see in the press these days:
More than 40 municipalities have been the victims of cyberattacks this year (NY Times 8/22/19)
A total of 850.97 million ransomware infections were detected in 2018 (Ponemon Institute)
Ransomware attacks on businesses increased in the first quarter of 2019, up 195% since the fourth quarter of 2018 (Malwarebytes)
Only time will tell if our respondents are as prepared as they feel. We hope everyone is double checking their backups in the meantime.
Switching gears, we also wanted to understand how security buyers are feeling about their security programs and their ever-increasing complexity. We're all aware of the constant innovation in security technology - every new IT innovation and new attack vector seems to bring another set of mandatory prevention controls. But the old controls (endpoint, for example) never seem to go away.
This proliferation of products came across clearly in our responses, with over 30% reporting they use at least 20 products. Industry
Google emerged as the most popular way to search the web by the 21st century, with Bing and DuckDuckGo as frequently used alternatives. But there’s loads of web content, delivered through the HTTP and HTTPS protocols, that cannot be found through conventional means.
When cyber criminals want to exchange information on the web, the smart ones avoid the parts of the web that are easy to track. Innovations in networking technology led to the creation of a part of the web that can only be reached by fully encrypted anonymizing proxy networks. Are those cyber criminals doing anything your business should be worried about? Deep Web and Dark Web are popular buzzwords these days, so what does it all mean?
The Deep Web and the Dark Web sound elusive and esoteric, but I can make it all easy to understand.
Deep Web versus Dark Web: What's the difference?
People very frequently confuse the Deep Web with the Dark Web and vice versa.
The Deep Web consists of all of the parts of the web which aren’t indexed by popular search engines like Google or DuckDuckGo. It’s not all a criminal red-light district; in fact, the majority of it is pretty innocuous. I made Angelfire and GeoCities websites as a '90s teen, years before Facebook, Google, or YouTube ever existed. I’d be a bit embarrassed for you to find the Spice Girls fan site I made back then, but it’s all perfectly legal and Safe For Work.
Most of the Deep Web is just stuff that’s too old or obscure to be found by one of Google’s web crawler bots that they use to help maintain their search engine. You can use your regular web browser to access much of the Deep Web, but you may need to use web archives in order to find what you want. The Wayback Machine is great for this purpose.
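The Wayback Machine even exposes a public availability endpoint (`https://archive.org/wayback/available`) that reports the closest archived snapshot of a URL. The sketch below only builds the query URL rather than sending the request, so it works without network access; the call itself is left as a comment.

```python
from urllib.parse import urlencode

# Public Internet Archive endpoint for "is this page archived?" lookups.
AVAILABILITY_API = "https://archive.org/wayback/available"

def wayback_query(url, timestamp=None):
    """Build a query URL for the Wayback Machine availability API.

    timestamp (optional) is a YYYYMMDD string; the API returns the
    archived snapshot closest to that date, if one exists.
    """
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return f"{AVAILABILITY_API}?{urlencode(params)}"

print(wayback_query("geocities.com", "19990101"))
# https://archive.org/wayback/available?url=geocities.com&timestamp=19990101

# To actually fetch the result you could do, e.g.:
#   import urllib.request, json
#   data = json.load(urllib.request.urlopen(wayback_query("geocities.com")))
```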
The Dark Web is also a part of the Deep Web. The Dark Web is the part of the Deep Web that can only be accessed through encrypted anonymizing proxy networks such as Tor or I2P. You will need to install special software on your PC or phone in order to use them. Those proxy networks are great for purposes like helping journalists in hostile territories report on war and politics. But because those proxies use cryptography and lots of relays in order to make servers and endpoints difficult to track, they also help to facilitate cyber crime.
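As a concrete example of that "special software": once a Tor client is running locally, it listens as a SOCKS5 proxy on port 9050 by default (the Tor Browser bundle uses 9150), and a Python script could route traffic through it with the `requests` library. The snippet below only shows the configuration shape and sends nothing over the network.

```python
# The "socks5h" scheme matters: the trailing "h" makes DNS resolution
# happen through the proxy as well, which is what allows .onion
# hostnames to resolve at all.
TOR_SOCKS = "socks5h://127.0.0.1:9050"  # default Tor client SOCKS port

tor_proxies = {
    "http": TOR_SOCKS,
    "https": TOR_SOCKS,
}

# Usage (requires `requests[socks]` installed and a running Tor client):
#   import requests
#   r = requests.get("https://check.torproject.org/", proxies=tor_proxies)
print(tor_proxies["https"])  # socks5h://127.0.0.1:9050
```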
Think of it this way. All Dark Web is Deep Web, but not all Deep Web is Dark Web, as all apples are fruit, but not all fruit are apples. All of the internet that’s outside of proxy networks like I2P or Tor is often referred to as the “clearnet,” in contrast with the “darknet.”
Types of cyber crime in 2019
Cyber criminals will often choose to use the Dark Web in order to engage in their malicious activities. The Dark Web is full of illegal marketplaces and forums where criminal activity is advertised and communicated about. If you install I2P software or the
Cyberbullying and cybersecurity incidents and breaches are two common problems in the modern, internet-driven world. The fact that they are both related to the internet is not the only connection they have, however. The two are actually intimately connected issues on multiple levels.
It may seem like an odd notion. After all, cyberbullying typically involves using technology to harass a person (often overtly), while cybersecurity involves preventing hackers and identity thieves from accessing information and then simply getting away without being caught. While the two have similarities in that they both involve malicious actors online, the motives are quite different. However, the points of connection between these two topics are worth exploring.
Defining cyberbullying and cybersecurity
When comparing terms like these, it can be helpful to lay out a definition for each in order to make sure everyone is on the same page. Cyberbullying is, simply put, bullying a person through technological outlets, such as social media or texting. Cybersecurity is the protection of sensitive data (and therefore people) using specific measures.
The modern world now knows that bullying can go beyond simple physical abuse; it can take place digitally as well. Cyberbullying can involve intimidating, deceiving, harassing, humiliating, and even directly impersonating a person. Since it takes place online, it also isn’t restricted to places like school or social gatherings. Due to the ubiquitous nature of the internet, cyberbullying can follow victims throughout every aspect of their lives.
It also typically involves the common issue of cyberstalking. While it may be cute or entertaining to learn about a new friend or potential partner by following their goings-on on Facebook, the issue of cyberstalking in a cyberbullying context is serious and is one of the key things that connect it to cybersecurity.
While cybersecurity is a broad topic, it’s worth taking the time to highlight some of the more specific areas of the practice that directly relate to the issue of cyberbullying.
Identity theft is the poster child of cybercrime, and it’s a threat that’s used in cyberbullying often. In addition to defrauding an individual by accessing or opening new lines of credit in their name, cybercriminals may impersonate an individual for other motives. For instance, if a cyberbully is stalking someone else, they may hack into their user account on a game, an email address, or social media account in order to impersonate them. This allows them to get information from their victim’s friends and family or harass them.
Another way a cyberbully can be a cybersecurity threat is by using malware to hack
With cybercrime on the rise, companies are always looking for new ways to ensure they are protected. What better way to beat the hackers than to have those same hackers work FOR you? Over the past few years, corporations have turned to Bug Bounty programs as an alternative way to discover software and configuration errors that would’ve otherwise slipped through the cracks. These programs add another layer of defense, allowing corporations to resolve the bugs before the general public is made aware or harmed by them.
Bug Bounty programs allow white-hat hackers and security researchers to find vulnerabilities within a corporation’s (approved) ecosystem, providing recognition and/or a monetary reward for disclosing them. For the corporation, this is a cost-effective way to have continuous testing, and when a vulnerability is found, the monetary reward can still be significantly less than the cost of a traditional pen test.
The idea of a bug bounty program didn’t immediately take off. It took Google launching their program in 2010 to really kickstart the trend, but according to HackerOne, by the end of 2018 over 100,000 total vulnerabilities had been submitted and $42 million had been paid out. In 2018 alone, an estimated $19 million was rewarded, which is more than all of the previous years combined. The most-reported vulnerability was cross-site scripting, followed by improper authentication, which saw a high number of big payouts in the financial services and insurance sectors; information disclosure rounds out the top three, with most of those bugs reported in the electronics and semiconductor industry.
Today, about 6% of the Forbes 2000 global companies have Bug Bounty programs, including companies like Facebook, United Airlines, and AT&T. AT&T was the first telecommunication company to announce the launch of their program, in 2012. AT&T’s Bug Bounty program has a fairly wide scope, allowing almost any vulnerability found within their environment to be eligible for a reward. As other telecommunication companies started their programs, AT&T was used as a resource to provide insight on what works well and what doesn’t.
While there are hundreds of bug bounty programs, no two programs are exactly alike. There has been a big shift away from internally managing these programs to outsourcing to third parties. Although these programs are most talked about in the technology industry, organizations of all sizes and industries have started having Bug Bounty programs, including political entities.
Both the European Union and the US Department of Defense have launched programs in recent years. The EU launched their program in January 2019, inviting ethical hackers to find vulnerabilities in 15 open source projects that the EU institutions rely on, providing a 20% bonus if the hacker
At Black Hat 2019 I had the pleasure to meet some AT&T colleagues who are now my new InfoSec buddies! I met Marc Kolaks and Don Tripp from the Office of the CSO at the AT&T Cybersecurity booth.
They told me about the weekend event they were volunteering for at Defcon. So, being nosy, I had to hear all about it and get some pics from the event (I couldn’t attend myself due to a date conflict with the Diana Initiative). First, some cute kid pics!
R00tz started back in 2011, originally called Defcon Kids. It is an event designed specifically for kids to introduce them to “White Hat” security. It includes hands-on events, talks, and contests geared for a younger crowd, including lock picking, soldering stations, capture-the-flag contests, technical talks, and more. One of the keys to the success of the event is that all these activities are specifically designed for a young audience and include an Honor Code.
Some of the key aspects of the Honor Code include the following values:
Only do good
Always do your best
Go big & have fun!
In general, the kids are encouraged to explore, to innovate and to learn.
The “rules” that govern R00TZ participation include:
Only hack things you own
Don’t hack anything you rely on
Respect the rights of others
Know the law, the possible risk, and the consequences for breaking it
Find a safe playground
AT&T participation: past and present
AT&T has participated in the r00tz event for the last few years. We’ve grown from being only a financial sponsor into actively participating.
Patrick McCanna and Marc Kolaks were the key individuals who got AT&T involved. Patrick provided the contacts, and Marc arranged the sponsorship. They saw a fantastic opportunity for AT&T to make a positive impact in the otherwise nefarious realm of hacking.
One of the major contributions that AT&T provides to the r00tz event is the "Junk Yard".
This event provides piles of old electronic equipment, ranging from cell phones to routers to typewriters. The kids are given hand tools and eye protection (this year, some AT&T Cybersecurity sunglasses were provided) and are allowed and encouraged to disassemble all this equipment simply to "see what's inside".
In addition to the Junk Yard, we've created various hands-on activities, ranging from penetration testing demonstrations to customized versions of the Hacker Games and Link Buster, in order to teach security best practices in a fun environment. Along with the games, we also hosted MIT's Scratch programming environment to let the kids experience computer programming on a fun and easy-to-understand platform.
Another addition to this year’s event included providing information to parents on AT&T’s ASPIRE program and information on STEM (Science, Technology, Engineering & Math) opportunities for th
This research project is part of my Master’s program at the University of San Francisco, where I collaborated with the AT&T Alien Labs team. I would like to share a new approach to automate the extraction of key details from cybersecurity documents. The goal is to extract entities such as country of origin, industry targeted, and malware name.
The Open Threat Exchange (OTX) is a crowd-sourced platform where users upload "pulses" containing information about a recent cybersecurity threat. A pulse consists of indicators of compromise and links to blog posts, whitepapers, reports, etc. with details of the attack. The pulse normally contains a link to the full content (e.g., a blog post), together with key metadata manually extracted from that content (the malware family, the target of the attack, etc.).
Figure 2 is a screenshot of an example of a blog post that could be contained in a pulse:
Figure 2: Snippet of a blog post from “Internet of Termites” by AT&T Alien Labs
Figure 3 is a theoretical visualization of our end-goal - the automated extraction of meta-data from the blog post which can be added to a pulse:
Figure 3: The same paragraph with entities extracted
This kind of threat intelligence collection is still largely manual, with a human having to read and tag the text. However, machine learning techniques can be used to extract the information of interest. We created custom named-entity models trained on domain-specific data to tag pulses, which helps speed up the overall process of threat intelligence collection.
Approach and Modeling
We collected the data by scraping text from all the pulse reference links on the OTX platform. We focused on HTML and PDF sources and used the appropriate document parser for each. Since the sources are not consistent, we put in place many rule-based checks to clean the text. For example, tags like 'IP_ADDRESS' and 'SHA_256' replace raw IP addresses and hashes; we substituted rather than removed them in order to preserve the word sequence and any dependencies. Next came the large task of annotating the documents, which spaCy's annotation tool, Prodigy, makes much less painful than it used to be.
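As an illustration, this kind of rule-based substitution can be done with a couple of regular expressions. The patterns and the `normalize_iocs` helper below are a simplified sketch, not the actual Alien Labs pipeline:

```python
import re

# Illustrative patterns for two common indicator types.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256 = re.compile(r"\b[A-Fa-f0-9]{64}\b")

def normalize_iocs(text: str) -> str:
    """Substitute raw indicators with placeholder tags, keeping the
    word sequence (and any dependencies) intact for the model."""
    text = SHA256.sub("SHA_256", text)
    text = IPV4.sub("IP_ADDRESS", text)
    return text
```

A real pipeline would need similar rules for many more indicator types (domains, CVE IDs, file paths, and so on).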
Figure 4 below shows an example annotation in which "Windows", rather than "China", is labeled as the country in the sentence. The confidence score for this annotation is very low, so we can reject it.
Figure 4: Example annotation from Prodigy
spaCy's built-in Named Entity Recognition (NER) model was our first approach. The current model architecture is not published, but this video explains it in more detail. We have also built a custom bidirectional LSTM, an architecture that has gained popularity in recent years.
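A trained NER model is statistical, but the shape of its output, labeled (start, end, label) spans, can be illustrated with a toy dictionary-based tagger. The `tag_entities` function and the gazetteer entries below are purely hypothetical, for illustration:

```python
# Hypothetical gazetteer mapping known phrases to entity labels.
GAZETTEER = {
    "Emotet": "MALWARE",
    "healthcare": "INDUSTRY",
    "China": "COUNTRY",
}

def tag_entities(text, gazetteer=GAZETTEER):
    """Return (start, end, label) spans for the first occurrence of
    each known phrase - the same span format a trained NER emits."""
    spans = []
    for phrase, label in gazetteer.items():
        start = text.find(phrase)
        if start != -1:
            spans.append((start, start + len(phrase), label))
    return sorted(spans)
```

The point of a statistical model, of course, is that it generalizes to malware names and countries it has never seen, which is exactly what a fixed dictionary cannot do.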
By 2025, it is estimated that there will be over 64 billion IoT devices around the world, with an increasing number being used around the home by mainstream consumers. Although these devices offer convenience and ease, homeowners need to be responsible for ensuring their security and safe upkeep. In the same way that homeowners add security systems to protect the physical aspects of a property, taking steps to improve the security of IoT devices will keep connected smart systems safe from attack.
Guarding against invasion
Combining smart technology with security creates a simple, integrated ecosystem to protect and monitor the home. A comprehensive home surveillance system will offer defense against physical intruders, but it is equally important to ensure that all smart systems and devices are also protected. In the past, intruders could only break into a home by physically smashing a window or breaking a lock; now they can gain access through a light bulb. This is possible because systems are connected: if a Wi-Fi password is insecurely stored on just one device, hackers could potentially view a credit card transaction taking place on another.
Reducing unnecessary complexity is one of the best ways to keep smart technology secure. The attack surface of any operating system is the sum of all the potential entry points through which an attacker could gain access, and it grows as more devices, services, and applications are added to a system. By ensuring entry points are only available to trusted users and disabling any unused or unnecessary services, there is less chance of infiltration.
With physical security systems and surveillance cameras, homeowners can clearly see that their physical property is secure from intrusion. However, a compromise in cybersecurity is harder to spot. By being more aware of vulnerable areas, and adding extra protection to weak spots, security is improved throughout the home.
Browsing privately helps ensure that no one spies on what you do online. Thanks to the tech growth the world has experienced over the years, you can choose to browse the entire web anonymously or, if you only need to hide from specific spies, opt to make all of your visits to a single website anonymous.
Without this anonymity, anyone who chooses to stalk you can easily do so by closely watching your browsing habits on a daily basis. The spy can be any person or entity: your partner, a parent, a business rival, or even the government. If you doubt that, try a VPN today and you will be amazed by how much of your browsing was visible to others before.
What are the benefits of browsing anonymously?
If you are like many people, you definitely don't appreciate it when others invade your privacy unannounced. So the primary benefit of browsing anonymously is protecting your privacy, and from that benefit stem many other related benefits. They include:
When searching for a new job, sometimes you browse through job advertisements using your office computer. The surest way of blocking your current employer from spying on your web searches from the company servers is to browse anonymously.
If you have been searching for prescription drug information of late, the last thing you want is an online drug store tracking down your IP address, collecting your email address without your consent, and sending you spam about a new medicine. Anonymous browsing will keep them out no matter how they try to access your browser.
Many countries have strong, restrictive web policies that you can only bypass by browsing anonymously.
It is cool to browse knowing that no one is spying on you. It gives you peace of mind.
Maybe there are sites you visit often but would not want a family member to find out. Problem solved.
It is common knowledge that we are all under constant surveillance by government snoops like the FBI or NSA. What better way to hide from them than to browse anonymously?
If you are a travel enthusiast who loves searching the web for flight prices and travel destinations, travel companies might know how desperate you are to travel and decide to hike prices. Blocking them from seeing your search history is vital for your travel budget.
How can you browse the internet anonymously?
Use a VPN
Buy a VPN (virtual private network) service to protect your data from hackers, government agencies, and rogue internet service providers. A VPN masks your IP address so that no surveillance can identify you through your web traffic.
Browse in a private window
Maybe you aren’t interested in keeping hackers or government surveillance at bay, all you want is to hide critical information from your family members or colleagues who happen to have access to your browsing device. Browsing on a private window means that your search queries aren’t saved in the browser. Even if someone goes looking for them in your history, they won’t find them.
The difference between this search engine and Google or Bing is that it doesn't sell your data to third parties. As a result, you will not receive targeted ads or be tracked through your browsing history. Even when you do see ads as you browse, they most probably carry no tracking cookies and, for what it's worth, are based on the search queries you have typed in recently. They aren't based on a user profile like the ones engines such as Google create for their users.
Deepfakes are the latest moral panic, but the issues about consent, fake news, and political manipulation they raise are not new. They are also not issues that can be solved at a tech level.
A deepfake is essentially a video of something that didn't happen, made to look extremely realistic. That might sound like a basic case of 'photoshopping', but deepfakes go way beyond this. By training AI algorithms on vast libraries of photographs of famous people, creators can produce videos that are eerily real and worryingly convincing.
As a result, plenty of analysts are worried that deepfakes might be used for political manipulation, or even to start World War 3.
Solving these problems is going to be hard, in part because they are an extension of problems that are already evident in the rise of fake news, faked videos, and misinformation campaigns.
What are deepfakes?
If you’ve never seen a deepfake, do a quick Google search for one, and watch the video. If this is your first time, you’re going to be pretty impressed, and possibly quite disturbed.
These videos are made by AIs. Deepfake authors collect a database of photographs of a person (as large a database as possible), and an AI then maps these onto a video using a technique known as generative adversarial networks (GANs). Because AI is developing at a rapid rate, so is the sophistication of deepfakes.
It will come as no surprise to learn that deepfakes were developed first for porn, to produce videos with Hollywood stars’ faces over other (women’s) bodies. But since then, the technology has increasingly been used to produce political videos, and by Hollywood itself.
The threat of the technology is certainly real, but let's get one thing out of the way first: if you are reading this and are worried that you might be the subject of a deepfake, you don't need to worry (at least yet). The technique relies on millions of photographs of a person being publicly available, and unless you are a celebrity, that's probably not the case. Regardless of your celebrity status, however, the best VPN services are a cost-effective ($5-10 monthly) defensive measure to consider for all your internet-connected devices.
Why does AT&T Cybersecurity get me so excited on behalf of the mid-sized enterprises that make up the bulk of business around the globe? Well, one example I like to share is from a bicycle manufacturer I had the pleasure of visiting a few years ago. As a cycling enthusiast myself, I know these manufacturers are true experts, with deep knowledge and passion for the businesses they run and technology they develop. Unsurprisingly, they were dismayed about the need to also become experts in cybersecurity.
Even if they were experts, it still might not help. Could they really afford to follow the security blueprint defined by global banks and other elite security teams? According to a Deloitte survey, large enterprises spend thousands per employee and up to hundreds of millions of dollars per annum on cybersecurity, often deploying dozens or even hundreds of expensive and sophisticated security solutions along the way.
For our bike manufacturer, it’s impossible to wade through all of the solutions on offer from the thousands of cybersecurity vendors out there. Their business is at risk through no fault of their own and the “solution” to mitigating that risk is beyond reasonable allocation of resources.
Mind you, it’s not just the bicycle company in this race. There’s the contract manufacturer that actually assembles the bikes, the advertising agency that promotes them, the distributors that get them into stores and perhaps 20 other major partners and subcontractors who support the core business. And this is just one major bicycle brand! There are millions of other mid-sized enterprises around the globe with the exact same problem. Every business, including the Fortune 500, would relish the opportunity to be more efficient in cybersecurity and to put more money back into the business. But for mid-sized companies, who don’t have the same resources to protect themselves, it’s a matter of survival.
Our bicycle brand should be focused on engineering the perfect machine to break 36 mph on a Tour de France stage, not on cybersecurity. Security shouldn't be something that soaks up resources and diverts attention from the core business. That's precisely why AlienVault automated threat detection and streamlined response, and why we continue to focus on making security more accessible as AT&T Cybersecurity.
What gets me excited for customers like the bicycle manufacturer is the ability to do all that and more, on a much grander scale, because of what AT&T brings to the table. With a core mission of connecting people where they live and work for more than 140 years, security is in AT&T’s DNA. Ever since there was something of value carried over a network, AT&T has been a leader—including what is now called cybersecurity. Serving more than 3 million companies globally from the smallest business to nearly all the Fortune 1000 has given AT&T unrivaled visibility into the threats and needs of business customers. And as a trusted advisor that provides countless integrated business solutions around the globe, AT&T has assembled a broad portfolio of nearly all of the leading security vendors to help in the mission.
We now have the opportunity to integrate AT&T’s unparalleled threat intelligence, AlienVault’s proven strengths in automation, and the world’s best cybersecurity solutions into one unified platform that eliminates cost and complexity for millions of companies both large and small. The bicycle manufacturer can choose to use the platform to manage security themselves, outsource the work completely, or utilize a collaborative model that utilizes collective expertise and capabilities. This is enabled through the AT&T consulting and managed services teams or through
Introduced to the market nearly two decades ago, Virtual Private Networks (VPNs) are a uniquely enduring cornerstone of modern security. Most large organizations still employ a VPN solution to facilitate secure remote access, while millions of consumers rely on similar products to bolster their online privacy, secure public Wi-Fi connections, and circumvent site blocks.
By now, most of us know that a VPN assigns us a new IP address and transmits our online traffic through an encrypted tunnel. But not all VPNs are created equal. Depending on the protocol in use, a VPN might have different speeds, capabilities, or even vulnerabilities.
Encryption protocols and ciphers are at the heart of VPN technology, determining how your ‘secure tunnel’ is actually formed. Each one represents a different solution to the problem of secure, private, and somewhat anonymous browsing.
Though many of us are aware of how a VPN generally works, it’s common to get lost on the fine details of the technology due to the sheer complexity of the subject. This confusion is reinforced by the fact that many VPN providers can be slapdash to the point of misleading when describing the type of encryption that they use.
This article will provide a simple point of reference for those who want to explore the technologies driving their VPN service. We’ll review different types of encryption, the main VPN protocols available, and the common ciphers behind them.
Once you can navigate the confusing array of terms commonly used by VPNs and other security products, you will be in a stronger position to choose the most secure protocol and to assess the claims made by VPN providers with a much more critical eye.
Types of encryption
At a very basic level, encryption involves substituting letters and numbers to encode data so that only authorized groups can access and understand it.
We now use powerful algorithms called ciphers to perform encryption and decryption. These ciphers simply denote a series of well-defined steps that can be followed repeatedly. The operation of a cipher usually depends on a piece of auxiliary information called a key; without knowledge of the key, it is extremely difficult – if not impossible – to decrypt the resulting data.
When talking about encryption today, we generally refer to a combination of cipher and key length, which denotes the number of 'bits' in a given key. For example, Blowfish-128 is the Blowfish cipher with a key length of 128 bits. Generally speaking, a shorter key length means poorer security, as the key is more susceptible to brute-force attacks.
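To see why key length matters so much, consider the worst-case time to try every key. A back-of-the-envelope sketch (the trillion-guesses-per-second rate is an assumption for illustration):

```python
def brute_force_years(key_bits: int, guesses_per_second: float = 1e12) -> float:
    """Worst-case years to exhaust a keyspace of 2**key_bits keys."""
    keyspace = 2 ** key_bits
    seconds = keyspace / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# Even at a trillion guesses per second, a 128-bit key needs on the
# order of 10**19 years; every extra bit doubles that figure.
```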
A key length of 256 bits is the current ‘gold standard’. This cannot be brute-forced as it would take billions of years to run through all the possible bit combinations. There are a few key concepts in the world of encryption:
Symmetric encryption
This is where the same key is used for encryption and decryption, and both communicating parties must possess that key in order to communicate. This is the type of encryption used in VPN services.
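As a toy illustration of the symmetric property, that the same key both encrypts and decrypts, here is a deliberately insecure XOR cipher:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a repeating key. Calling it
    twice with the same key returns the original data, which is the
    defining property of symmetric encryption. Not secure; real VPNs
    use vetted ciphers such as AES."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```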
Asymmetric encryption
Here, software is used to create pairs of public and private keys. The public key is used to encrypt data, which is then sent to the owner of the private key, who uses the private key to decrypt the messages.
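A toy RSA example with tiny primes can make the public/private split concrete (illustration only; real keys use primes hundreds of digits long, and this sketch assumes Python 3.8+ for the modular inverse):

```python
# Toy RSA keypair from two small primes.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

def encrypt(m: int) -> int:
    """Anyone can encrypt with the public pair (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private exponent d can decrypt."""
    return pow(c, d, n)
```

In practice this asymmetric step is typically used only to agree on a symmetric session key, which then carries the bulk of the traffic.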
Handshake encryption (RSA)
Securely connecting to a VPN server requires the use of public-key encryption through a TLS handshake. While a cipher secures your actual data, this handshake secures your connection.
That’s why, in addition to advanced security protection and prevention controls, organizations need a way to continuously monitor what’s happening on their networks, cloud environments, and critical endpoints and to quickly identify and respond to potential threats. But, for many businesses, building an effective threat detection and incident response program is costly and challenging, especially given the industry’s shortage of skilled security professionals.
AT&T Managed Threat Detection and Response
With these challenges in mind, AT&T Cybersecurity is excited to introduce AT&T Managed Threat Detection and Response, a sophisticated managed detection and response service (MDR). The new service brings together people, process, and technology in a virtually seamless way to accelerate and simplify threat detection and response, helping organizations to detect and respond to advanced threats before they impact the business. AT&T Managed Threat Detection and Response builds on our 30 years of expertise in security operations, our award-winning unified security management (USM) platform for threat detection and response, and the unrivaled visibility and threat intelligence of AT&T Alien Labs. With advanced features like 24 x 7 proactive security monitoring, threat hunting, security orchestration, and automation in one turnkey solution, businesses can quickly establish or enhance their security program without the cost and complexity of building it themselves.
“We couldn’t do the things that AT&T brings to us for four times the cost of what we’re paying now,” said Stephen Locke, CIO, NHS Management, LLC. “Even if we did, we wouldn’t have the same level of expertise and intelligence of what’s happening in the cybersecurity world.”
With AT&T Managed Threat Detection and Response, critical IT assets are monitored by one of the world's most advanced security operations centers (SOCs). The AT&T Managed Threat Detection and Response SOC has a dedicated team of trained security analysts who are solely focused on helping organizations protect their business by hunting for and disrupting advanced threats around the clock. Our SOC analyst team not only handles the daily security operations of monitoring and reviewing alarms to reduce false positives, but also conducts in-depth incident investigations. These provide incident responders with rich threat context and recommendations for containment and remediation, helping security teams respond quickly and efficiently. AT&T Cybersecurity SOC analysts can even initiate incident response actions, taking advantage of the built-in security orchestration and automation capabilities of the USM platform, or send incident response specialists onsite if the situation requires.
Stephen Locke added, “Adding AT&T Managed Threat Detection and Response reduced my
Across the board, security teams of every industry, organization size, and maturity level share at least one goal: they need to manage risk. Managing risk is not the same as solving the problem of cybersecurity once and for all, because there is simply no way to solve the problem once and for all. Attackers are constantly adapting, developing new and advanced attacks, and discovering new vulnerabilities. Security teams that have accepted the post-breach mindset understand that cybersecurity is an ongoing chess match with no end. They focus on reducing risk as much as possible through visibility and automation, instead of searching for a one-size-fits-all solution.
Incident response plays a key role in effectively reducing risk. In a breach, the average cost per lost or stolen record is $148, and having an incident response team reduces this cost by almost 10%. Because of the human component of critical thinking that goes hand-in-hand with response and resolution, incident response is not something you can totally automate. But that doesn’t change the fact that it is something organizations absolutely need in the event of a breach. Despite this, 77% of IT professionals say their organization does not have a formal cybersecurity incident response plan. Instead, organizations respond in an ad-hoc fashion to threats without digging for the root cause of the incident and resolving it. Incident response is an under-utilized asset that has organizational and defensive, immediate and long-term benefits.
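Using the article's figures ($148 per record, and treating "almost 10%" as a flat 10% reduction with an incident response team), the savings are easy to sketch. The helper below is illustrative, not a formal cost model:

```python
COST_PER_RECORD = 148   # average cost per lost or stolen record (USD)
IR_SAVINGS = 0.10       # assumed ~10% reduction with an IR team

def breach_cost(records: int, has_ir_team: bool = False) -> float:
    """Estimated breach cost, optionally discounted for an IR team."""
    cost = records * COST_PER_RECORD
    if has_ir_team:
        cost *= 1 - IR_SAVINGS
    return cost

# For a 10,000-record breach, an IR team saves roughly $148,000.
```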
An incident response team is accountable for having a plan to handle an incident and implementing it. They’re prepared to mitigate damage, identify the root cause of an incident, and communicate with the proper channels. But they are also responsible for another crucial part of incident response: the post-incident review.
Post-incident review is about identifying every aspect of an incident down to its true root cause. It answers critical questions like what happened before, during, and after the attack. By answering these questions, organizations can ensure the same attack doesn’t happen twice. They review the attack, and identify and close all gaps in their defense that the attacker leveraged.
However, this leaves post-incident review with a major problem.
It takes organizations an average of 191 days to identify a data breach. For a post-incident review that does its due diligence, this means potentially going all the way back in time through at least 191 days’ worth of data to find the root cause of the attack. Consider all of the data in your environment that has come and gone over the course of 191 days. How many investigations have your analysts performed in that time?
Every week the AT&T Chief Security Office produces a series called ThreatTraq with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them; you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. The video features Jaime Blasco, VP and Chief Scientist, AlienVault; Stan Nurilov, Lead Member of Technical Staff, AT&T; and Joe Harten, Director, Technical Security.
Stan: Jaime. I think you have a very interesting topic today about threat intelligence.
Jaime: Yes, we want to talk about how threat intelligence is critical for threat detection and incident response, but also about how, when that threat intelligence is shared and threat actors watch the indicators and information being shared, it can actually be bad for companies. So we are going to share some of the experiences we have had managing the Open Threat Exchange (OTX), one of the biggest threat sharing communities out there.
Stan: Jaime mentioned that they have so many threat indicators and so much threat intelligence as part of OTX, the platform.
Jaime: We know attackers monitor these platforms and adjust their tactics, techniques, and probably their infrastructure in reaction to cyber security companies sharing their activities in blog posts and other reporting.
An example is APT28: in September 2017, the group became harder to track because some of the infrastructure and some of the techniques they were using had become publicly known. Another cyber security company then published content about them, and APT28 became much more difficult to track.
The other example is APT1. If you remember the APT1 report that Mandiant published in 2013, it made the group basically disappear from the face of the earth, right? We didn't see them for a while; then they changed their infrastructure and a lot of the tools they were using, and they came back in 2014. So we can see that the threat actor disappeared for a while, changed and rebuilt, and then came back. We also know that attackers can try to publish false information on these platforms, so that's why it's important that those platforms are not only automated but also have human analysts who can verify the information.
Joe: It seems like you have to have a process of validating the intelligence, right? I think part of it is you don't want to take this intelligence at face value without having some expertise of your own that asks, is this valid? Is this a false positive? Is this planted by the adversary in order to throw off the scent?
I think it's one of those things where you can't automatically trust threat intelligence. You have to do some of your own diligence to validate the intelligence, make sure it makes sense, make sure it's still fresh, it's still good. This is something we're working on internally: creating those other layers to validate and create better value from our threat intelligence.
Jaime: The other issue I wanted to bring to the table is what we call false flag operations - that's when an adversary or a threat actor studies another threat actor and tries to emulate their behavior. So when companies try to do at
Here is a short communication tip that may help you in your daily interactions. How often have you “resent” an E-Mail? How often have you told a person that you will “send an invite”?
You may be wondering why I am bringing this up in a post usually reserved for cybersecurity. Am I just being overly pedantic? Am I just a rigid grammarian? One could easily assert that (and my friends do so all the time, so feel free to jump on that bandwagon). However, there is more to it than that.
While we tend to use the word "resent" to indicate sending a message again, as yet there is no recognized usage of that kind in the English language. The same is true for the word "invite"; it is not yet recognized in the way we are using it.
To resent means to feel ill will or annoyance, so when you tell people that you "resent an E-Mail", they may wonder what they did wrong to generate such ire. Similarly, when you tell a person that you will "send an invite", you are actually issuing two commands. Quite confusing!
I often wonder what we all do with the time we save by not taking the time to type that we will send the message again, or by saving the extra two syllables in the word "invitation".
Of course I am bringing all of this up in a humorous way, since language is an always-evolving body of knowledge with broad influences. However, there is a social aspect to this. As you may already be aware, when communicating in person, subtle mirroring of various behaviors is very important to a successful interaction. The same can be true of the language we use. If the person with whom you are communicating uses the colloquialisms (such as resent and invite rather than send again and invitation), then perhaps we should flow along with that, regardless of our personal preferences. Of course, always be genuine and authentic when doing so, or you could be incorrectly perceived as condescending.
One of the keys to effective communication is to meet the other person “where they are”. Since we work with folks at all levels of the corporate and social spectrum, it is important for us to take the time to recognize and correctly echo the sentiment, as well as the tone of the communication to achieve a better dialogue with those we serve.
Now, if you will excuse me, I need to go send an invite.
The elderly population in the U.S. has been on a steady incline for the past few decades, and with more seniors living longer, new challenges arise. Unfortunately, many seniors become vulnerable to different types of abuse, neglect, and exploitation as they age. The National Council on Aging estimates that financial fraud and abuse against seniors costs older Americans up to $36.5 billion each year.
The perpetrators of financial abuse can be anyone, such as family members, paid caregivers, or strangers who hack into systems and steal vital financial data. You must be well informed about financial fraud to know what to do about it and keep the seniors in your life safe.
Vulnerability and financial fraud
Financial exploitation can leave any target, whether a business or an individual, with significant losses. However, when you combine this general risk with some of the cognitive deficits common in the elderly population, the result can be financial devastation. Risk factors that place seniors at higher-than-average risk of becoming a victim include:
Needing assistance with activities of daily living.
Living with no spouse or partner.
Not using regulated social services.
Just a few short years ago, financial fraud had to be committed face-to-face with the senior, another family member, or a banking institution. Today, attackers can sit in the comfort of their homes and electronically attack funds in banking institutions, Social Security information, and other vital data that can unlock several accounts. These types of security incidents might not even be reported, because victims are often not required by law to report them.
Importance of prevention
Recovery after financial abuse or exploitation can be nearly impossible. Taking steps to prevent it from ever happening is the best strategy to keep seniors safe. Here are a few strategies you can use:
Know the types of abuse
The underlying message around financial fraud is that you and any seniors you care for can never be too careful when it comes to their money. Types of financial fraud range from someone selling them services they don’t need to complex online identity theft. Here are a few of the types of fraud you should know about.
DDoS (distributed denial-of-service) attacks happen when hackers flood a company’s servers, networks, or devices with so much traffic that legitimate users can no longer reach them. While a DDoS attack does not itself expose personal data, attackers sometimes use the disruption as cover for intrusions that expose information about hundreds or thousands of people. To protect seniors, be sure to assist them with choosing reputable companies when they do business.
Phishing happens when hackers send emails to a bank’s or other business’s customers that look legitimate. The email will usually ask the user to provide an account login, personal data, or a password.
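A simple defensive habit against phishing is to check where a link actually points before trusting it. Here is a minimal sketch in Python; the trusted-domain list is hypothetical, and reducing “looks legitimate” to a hostname check is an illustrative simplification, not a complete phishing filter:

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list: the domains the real institution actually uses.
TRUSTED_HOSTS = {"mybank.com", "www.mybank.com"}

def suspicious_links(email_body: str) -> list[str]:
    """Return every link in the text whose hostname is not on the trusted list."""
    urls = re.findall(r"https?://[^\s\"'<>]+", email_body)
    return [u for u in urls if urlparse(u).hostname not in TRUSTED_HOSTS]
```

A lookalike address such as `mybank.com.evil.example` fails this check even though it starts with the bank’s name, which is exactly the trick many phishing emails rely on.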
Welcome back to the next edition of “Hacking WordPress”. Find Part 1 if you missed it. Let me start with a PSA: it is illegal to hack, log in to, penetrate, or take over a system or network of systems without the explicit permission of the owner. Criminal hacking is punishable under federal law. I am describing these methods so you can learn more about WordPress and protect your sites better.
The Computer Fraud and Abuse Act of 1986, codified as Title 18, Section 1030 of the United States Code, is the primary federal law governing cybercrime in the United States today. It has been used in such famous cases as the Morris worm case and the prosecution of notorious TJX hacker Albert Gonzalez.
Stress testing your own WordPress site with penetration testing
Now, in this edition we are going to use Kali Linux and WPScan to run a few commands against a WordPress site built in the lab for testing purposes. In the last episode I told you about Bitnami, which provides a fully virtualized version of WordPress in .ovf format, ready to spin up on a VMware ESXi server. You can find the download here: https://bitnami.com/stacks
In this episode we are going to pen test a WordPress site for a couple of things. These checks will not give us access to the site; they are reconnaissance. Recon will tell you a lot about a site and its security, and once you have the basic information, it’s easier to move on to deeper penetration efforts and possibly even breaching the site through a brute-force attack.
How to find your WordPress vulnerabilities
First, you must prepare your instance of WPScan on Kali Linux to ensure you have the latest scan patterns, definitions, and updates for plugins and templates, as these updates contain information about weaknesses and exploits in the assorted add-ons that work with WordPress.
When you run the command below, you should see output like the following in your Kali Linux terminal.
root@kali:~# wpscan --update
WordPress Security Scanner by the WPScan Team
Sponsored by Sucuri - https://sucuri.net
@_WPScan_, @ethicalhack3r, @erwan_lr, @_FireFart_
[i] Updating the Database ...
[i] Update completed.
This command runs a basic scan of the website, in this case by IP address. You can run it against the FQDN if you prefer; I am using the IP because this is a lab setup.
root@kali:~# wpscan --url 10.25.100.22
WordPress Security Scanner by the WPScan Team
Sponsored by Sucuri - https://sucuri.net
@_WPScan_, @ethicalhack3r, @erwan_lr, @_FireFart_
[+] URL: http://10.25.100.22/
[+] Started: Tue Jun 25 23:59:58 2019
Currently, we’re in a period of growth for supply chain management. With the digital revolution bringing industry players around the globe closer together, business operations have expanded for companies big and small. As both business owners and consumers, we’re experiencing the changes every step of the way as well.
Each change brings with it a new set of challenges and benefits, ushering in a new set of industry rules that companies are left to learn while keeping their operations running. Adopted and implemented poorly, these changes can stall a business’s growth. It’s therefore necessary to understand them beforehand in order to weather them when they hit you.
The melding of physical mechanisms and digital systems
The average business owner is no stranger to physical threats to supply chain efficiency, especially if they’ve been around for a while. For instance, though it is one of the most conventional ways of moving goods, truck transportation can be very dangerous. Incidents on the road have long been problematic in the process of manufacturing and distributing products.
However, the progression of digital tools and the growing role of data have added another layer to the complex journey that brings products to consumers. For the most part, digital and data-based technologies have made the supply chain more efficient. While challenges exist, the pros outweigh the cons.
Big data insights can help entrepreneurs identify weaknesses in their supply chain, whether that be a lack of sufficient technology, staff, or organization. With this information, they can take real action to streamline the chain, enabling them to respond more flexibly to changes in the market.
Automation technology is an excellent example of advancements that can help business owners achieve greater efficiency. Companies like Amazon have already started incorporating automation into their practices. It would be wise to learn from their efficiency and apply it to your own operations.
Speaking of data, it’s important to constantly evaluate your data security practices. A good data security plan takes specifics into account, and with supply chains there are many seemingly minor specifics that could disrupt the entire flow of a business if mishandled. After all, much of the new technology in supply chain management has progressed because of the inclusion of data.
Data advancements are certainly what has brought in this new age of supply chain management, but failing to protect data can have disastrous consequences. As businesses exchange information with a variety of suppliers and vendors, new potential data security risks arise. When you factor in subcontractors and other players, it’s likely that companies aren’t even aware of all those involved in their own supply chains.
Maybe that last sentence sounds odd to you. How could companies not be aware of everyone involved in their business? With large-scale operations, however, there may not be adequate time or labor to closely watch everything. Different parties who can take care of specific
Every week the AT&T Chief Security Office produces a series called ThreatTraq with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them; you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. The video features Jonathan Gonzalez, Principal Technical Security, AT&T; John Hogoboom, Lead Technology Security, AT&T; and Jim Clausing, Principal Member of Technical Staff, AT&T.
Jonathan: Twenty percent of the top 1,000 Docker images have at least one high vulnerability.
Jim: Jonathan, I understand you have a story on vulnerable Docker containers.
Jonathan: Yes, Jim. Thank you. Actually, I'm going back in time a little bit. Two months ago when I was last here, I brought up a story about Alpine Linux and the root account having an empty password. Well, it seems Jerry Gamblin from Kenna Security was inspired to try to figure out how many more there were. He started trying to figure out things like, "How do I scan a Docker image from Docker Hub?"
Around the same time, in May, a group from Japan released an open source application called Trivy, which allows you to pull a Docker image from the hub or a private registry, extract its contents, and scan it to find out what vulnerabilities exist at the OS level or even in some applications. I think they cover Node and NPM applications, Yarn, and others. The researcher said, "Perfect, this is the tool I need to find out what's going on in these images." He ran this tool through the top ~10,000 most-pulled images on Docker Hub and put the results out on the web. The website is vulnerablecontainers.org.
John: That might be a good thing if you're big in the Docker space and you're making your own containers and images that you use as part of your production process to identify if you have any vulnerabilities in a container that you're building or using.
Jonathan: One of them he mentioned on Twitter that is a little scary is Ruby on Rails, which is very popular. There was an image called Rails that was deprecated about two years ago. Two years' worth of vulnerabilities in the OS and everything else - and people are kinda still pulling from it. Docker officially moved it to a new image called Ruby. But if you aren’t aware that the name changed...
John: That’s confusing.
Jonathan: Correct. And kind of misleading, because you can get the latest tag and keep pulling the latest image, but if they haven't updated in two years...
John: And they moved it to a different name…
Jonathan: The researcher points out that there's no clear way for someone pulling the image to know that it's been deprecated unless you go to Docker Hub and see the description that says deprecated, right?
John: Right, right.
Jonathan: So hopefully, they're talking about putting something in the command line to tell you, "Hey, stop using this," "Rails is deprecated, grab the latest from Ruby."
The cloud certainly offers its advantages, yet as with any large-scale deployment, it can also present some unforeseen challenges. The notion of the cloud as just “someone else’s data center” has always made me cringe, because it implies a handoff of security responsibility: ‘someone else will take care of it.’
Yes, cloud systems, networks and applications are not physically located within your control, but security responsibility and risk mitigation are. Cloud infrastructure providers allow a great deal of control in terms of how you set up that environment, what you put in your environment, how you protect your data and how you monitor that environment. Managing risk throughout that environment and providing alignment with your existing security framework is what is most important.
Privacy and Risk
With GDPR and its “sister” policies in the U.S., as seen in Arizona, Colorado, California and other states, organizations face increased requirements for protecting data in the cloud. And it is not as simple as deploying data loss prevention (DLP) in a data center, because the data center has become fragmented. You now have many services, systems and infrastructures that you no longer own, but that still require visibility and control.
Cloud services and infrastructures that share or exchange information also become difficult to manage: who owns the SLAs? Is there a single pane of glass that monitors everything? DevOps has forced corporations to go as far as implementing micro-segmentation and adjusting processes around firewall rule change management. Furthermore, serverless computing has provided organizations with a means to cut costs and speed productivity by allowing developers to run code without having to worry about infrastructures and platforms. Without having a handle on virtual private clouds and workload deployments, however, things can quickly spin out of control and you start to see data leaking from one environment just as you’ve achieved a comfortable level of security in another.
Several steps can be taken to help mitigate risk to an organization’s data in the cloud.
Design to align. First and foremost, align your cloud environment with cybersecurity frameworks. Organizations often move to the cloud so rapidly that the security controls historically applied to their on-premises data centers, which have evolved and hardened over time, do not migrate effectively or map directly to the cloud. Furthermore, an organization may relax the security microscope on widely used SaaS applications; yet even with these legitimate business applications, without the right visibility and control, data may end up being leaked. Aligning cloud provider technology with cybersecurity frameworks and business operating procedures provides a more secure, optimized and productive cloud deployment. Moreover, doing this while implementing the cloud technology can demonstrate measurable security improvement to the business by giving a “before” and “after” picture.
Make yourself at home. Cloud systems and networks should be treated the way you treat your LAN and Data Center. Amazon’s Shared Responsibility Model, for example, outlines where Amazon’s security responsibility ends, and your security responsibility begins. While threats at the compute layer exist, as we’ve seen
Being proactive is the key to staying safe online, especially for businesses and organizations that operate websites and mobile applications. If you wait for threats to appear, then in most cases it is too late to defend against them. Many data breaches come about this way, with hackers uncovering security gaps that had gone previously undetected.
The average web developer wants to assume that their code and projects will always function in the intended manner. Reality is a lot messier than that and organizations need to expect the unexpected. For years, cybersecurity experts recommended a practice known as penetration testing (and still do), where internal users pose as hackers and look for exposed areas of servers, applications, and websites.
The next evolution of penetration testing is something that is known as Chaos Engineering. The theory is that the only way to keep online systems secure is by introducing random experiments to test overall stability. In this article, we'll dive more into Chaos Engineering and the ways it can be implemented effectively.
Origin of Chaos Engineering
The cloud computing movement has revolutionized the technology industry but also brought with it a larger degree of complexity. Gone are the days when companies would run a handful of Windows servers from their local office. Now organizations of all sizes are leveraging the power of the cloud by hosting their data, applications, and services in shared data centers.
Back in 2010, Netflix was one of the first businesses to build their entire product offering around a cloud-based infrastructure. They deployed their video streaming technology in data centers around the world in order to deliver content at a high speed and quality level. But what Netflix engineers realized was that they had little control over the back-end hardware they were using in the cloud. Thus, Chaos Engineering was born.
The first experiment that Netflix ran was called Chaos Monkey, and it had a simple purpose. The tool would randomly select a server node within the company's cloud platform and completely shut it down. The idea was to simulate the kind of random server failures that happen in real life. Netflix believed that the only way they could be prepared for hardware issues was to initiate some themselves.
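The core of that first experiment is tiny: pick a random node from the fleet and kill it. A minimal sketch, with a stubbed `terminate` callback standing in for a real cloud provider API call (all names here are illustrative, not Netflix’s actual code):

```python
import random

def chaos_monkey(nodes, terminate):
    """Randomly select one node from the fleet and shut it down.

    `terminate` is whatever callable actually stops the instance;
    in production it would wrap a cloud provider API call.
    """
    victim = random.choice(nodes)
    terminate(victim)
    return victim

# Dry run: record which node would have been killed instead of killing it.
killed = []
victim = chaos_monkey(["node-a", "node-b", "node-c"], killed.append)
```

Passing a recording stub instead of a real terminator is also how you would test such a tool safely before pointing it at live infrastructure.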
It's important not to rush into the practice of Chaos Engineering. If your experiments are not properly designed and planned, then the results can be disastrous and little helpful knowledge will be gained. Best practice is to nominate a small group of IT staff to lead the activities.
Every chaos experiment should begin with a hypothesis, where the team questions what might happen if their cloud-based platform experienced an issue or outage. Then a test should be designed with as small of a scope as possible in order to still provide helpful analysis.
One area where companies often need to focus their chaos experiments is in relation to
So, what is malware analysis and why should I care?
With the commercialization of cybercrime, malware variations continue to increase at an alarming rate, and this is putting many a defender on their back foot. Malware analysis — the basis for understanding the inner workings and intentions of malicious programs — has grown into a complex mix of technologies in data science and human interpretation. This has made the cost of maintaining a malware analysis program generally out of reach for the average organization.
And, the era of “big data” that we’re currently in isn’t making things any easier. At AT&T Cybersecurity, for example, our AT&T Alien Labs threat intelligence unit analyzes a ton of threat data coming in from the AT&T IP network, our threat-sharing community of 100,000 security professionals (Open Threat Exchange, or OTX), and our global sensor network. To give you an idea of the scale, in a single day:
More than 200 petabytes of traffic cross the AT&T network, including 100 billion probes for potential vulnerabilities
Open Threat Exchange (OTX) users publish around 47,000 contributions of threat data to the platform
Alien Labs collects 20 million threat observations and analyzes more than 370,000 malware samples and 400,000 suspicious URLs collected via our global sensor network
To get through all of this big data, Alien Labs uses multiple layers of analytics and machine learning, including a variety of malware analysis tools. With these tools, we can quickly perform threat artifact assessment (i.e. is this a false alarm or true threat), threat indicator extraction and expansion, behavioral analysis, malware clustering and more. Essentially, we’re filtering through the noise of big data so our threat researchers can more quickly validate, evaluate and interpret that information and turn it into the enriched, tactical threat intelligence that drives our approach to threat detection and response.
Malware analysis tools and techniques
As a broad overview (and I do mean broad), the various tools used for malware detection and analysis fall into three categories: static analysis, dynamic analysis, and hybrid analysis.
Dynamic analysis involves running the malware sample and observing its behavior on a system in order to understand the infection and how to stop it from spreading to other systems. The system is set up in a closed, isolated virtual environment, such as a virtual machine or “sandbox.”
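For contrast, the static side of this taxonomy examines a sample without ever running it. A toy first-pass sketch in Python, hashing a sample and pulling out its printable strings; real static analysis goes much deeper, into disassembly and file-format parsing, and the “sample” bytes here are fabricated for illustration:

```python
import hashlib
import re

def static_triage(sample: bytes):
    """Hash a sample and extract printable ASCII strings,
    a typical first step before any deeper static analysis."""
    sha256 = hashlib.sha256(sample).hexdigest()
    strings = [s.decode() for s in re.findall(rb"[ -~]{6,}", sample)]
    return sha256, strings

# Example on a fake "sample": embedded URLs and paths often surface this way.
digest, found = static_triage(b"\x00\x01MZ-stub\x00http://example.com/payload\x00")
```

The hash gives you an identity to look up against known-bad databases; the strings often reveal command-and-control URLs, file paths, or ransom notes without detonating anything.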
Cloud adoption continues to grow as more businesses discover the cost-saving potential and convenience that come with it. However, misconfigured servers are still a major risk for companies using infrastructure and platform as a service. Misconfigured servers are characterized by default accounts and passwords, unrestricted outbound access, enabled debugging functions, and more. According to an article on ZDNet, 2.3 billion files were exposed on misconfigured servers, storage and cloud services in 2019.
However, not all businesses primarily use the cloud for file transfer and data storage. Some people still prefer bulk USB drives because they do not require an internet connection and can be physically protected. In addition, access to them can be limited to the owner, and they have kept shrinking in size even as their storage capacity has grown. However, USB drives can also arrive from a vendor preloaded with malware that infects everything they are plugged into.
You can protect your computer system
The greatest risk of USBs is that they are very small yet someone can use them to steal massive amounts of data and easily take that data anywhere. Some companies and organizations like the US military have responded to this risk by banning their use completely. To ensure employees or workers stick to this ba
The results are in, and once again AT&T Cybersecurity is recognized as an industry leader, securing its third consecutive ranking of “very strong” in GlobalData’s annual product report.
AT&T is the only company to achieve this hat-trick rating across all seven of GlobalData’s categories of assessment. AT&T’s bold acquisition of AlienVault has reaffirmed its position as a cybersecurity leader with both competitive and qualitative edges.
AT&T provides a robust end-to-end portfolio that includes a 360-degree solution for its customers’ pain points: advanced threat intelligence, highly secure endpoints, mobile threat defense, compliance and risk management, and highly secure infrastructure. AT&T Cybersecurity consulting and managed solutions continue to impress and achieve results for small businesses and enterprises alike.
Following the synergetic merger between AT&T’s Cybersecurity Consulting and Managed Security Services with AlienVault late this winter, today, the new and formidable AT&T Cybersecurity integrates the best-of-breed technologies through AlienVault’s Unified Security Management platform with its own unparalleled consulting services, network visibility, reliability, and curated threat intelligence.
GlobalData’s assessment is one we can bank on. Through an independent review process involving in-depth analysis, media reviews, independent consultations, and input from external thought leaders, its annual product assessment report is one that business and technology leaders have come to rely on as impartial and accurate. The report for Global Managed Security Services (MSS) was recently published. Read the full report.
Remember when you were younger, and you wanted to do something that all your friends were doing, yet you knew your parents would never approve? Perhaps it was skating in that home-made “Half-Pipe”, or that time you wanted to try some equally dangerous stunt?
Of course, your parents disapproved, to which you probably responded with the time-honored refrain: “But everyone is doing it!” That was never a convincing argument. This probably added to the thrill, so you did it anyway (and you have the scars to prove it).
Do you ever wonder what would have happened if you were the first person to build that half-pipe? Would the parental support have been different? Would being the first be a special accomplishment? Would others follow?
As with many “firsts”, sometimes it is better to be second, third, or even fourth; a new idea may arrive too early or seem too daring, and sometimes it just needs time to mature (remember MySpace, and its predecessor, Friendster?)
Here’s a “near-first” that you can try that poses no risks, and can increase your account security. Be the first to extend your password beyond the required minimum.
It has been over two full years since The National Institute of Standards and Technology (NIST) announced the replacement of the old password standards. If you don’t recall, some of the sweeping changes included:
Allow at least 64 characters in length to support the use of passphrases. Encourage users to make memorized secrets as lengthy as they want, using any characters they like (including spaces), thus aiding memorization.
Do not impose other composition rules (e.g. mixtures of different character types) on memorized secrets.
Do not require that memorized secrets be changed arbitrarily (e.g., periodically) unless there is a user request or evidence of authenticator compromise.
Yes, NIST doesn’t even call them passwords anymore – they are “memorized secrets”.
Two years later, and sadly, many folks I know are still conforming to the same old password rules. Why are we not bold enough to take the lead on this? Your organization may require a minimum of 8 characters, but they probably do not restrict you to 8 characters. Are you still using the minimum character length?
Now is the time to break free of the old password model and increase the length of your memorized secret. It’s a password-volution! The benefit that this serves is that when your organization finally adopts the NIST passphrase recommendation, you will be ahead of the curve. Raise your Latte in triumph, you non-conformist hipster!
In all seriousness though, this new approach of long passphrases is going to happen quicker than you may imagine. It is always better to become accustomed to a change before it is mandatory. Until the full NIST recommendation is adopted, we would still need to add uppercase and special characters to satisfy the old rule set, but that is always easy when a passphrase is being used. #EasyWhenUsingAPassphrase! See what I mean? That is a strong memorized secret!
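The math behind the passphrase recommendation is easy to check. Assuming uniformly random characters, which human-chosen secrets are not, so treat these numbers as upper bounds, length beats character-set complexity very quickly; the alphabet sizes below are illustrative:

```python
import math

def search_space_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a uniformly random string of the given length."""
    return length * math.log2(alphabet_size)

# An 8-character password drawn from ~72 mixed symbols (upper, lower,
# digits, specials) vs. a 28-character lowercase-plus-space passphrase.
short_complex = search_space_bits(8, 72)   # roughly 49 bits
long_phrase = search_space_bits(28, 27)    # roughly 133 bits
```

Even with a far smaller alphabet, the long passphrase offers vastly more guessing work for an attacker, which is precisely why NIST emphasizes length over composition rules.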
Try it out, and be one of the first in your organization to adopt the new approach. Get into this habit before everyone is doing it.
Here’s wishing you the best for your password future.
Every week the AT&T Chief Security Office produces a series called ThreatTraq with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them; you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. The video features Michael Stair, Lead Member of Technical Staff, AT&T; Matt Keyser, Principal Member of Technical Staff, AT&T; and Manny Ortiz, Director Technology Security, AT&T.
Michael: A flaw in Exim is leaving millions of Linux servers vulnerable.
Matt: Hey, Mike. I heard there was a pretty serious flaw affecting Exim email servers. What can you tell us about it?
Michael: Yes, attackers are exploiting a pretty critical flaw in Exim, the popular Linux mail transfer agent (MTA), allowing remote command execution. Exim is an SMTP mail relay; it runs a large percentage of internet mail servers and is the default MTA on some Linux systems. Based on a recent Shodan scan, up to three-and-a-half million servers could be vulnerable.
The bug itself was tracked down to improper validation of some of the recipient addresses. The flaw was given a 9.8 out of 10 on the CVSS v3 scale. It affects versions 4.87 through 4.91, and the latest version, 4.92, is unaffected.
Matt: So it's a big bug. And it is a remote code execution (RCE) bug, which is one of the most critical types you could possibly have.
Michael: They do have patches out, and they're porting them to all affected versions back to 4.87 if you're using an older release. So just make sure you're patching and staying up to date with the most recent version, because it's a pretty serious issue.
Matt: It sounds like it's something you could just address the email to somebody and you just drop an exploit in there and it's remote code execution?
Michael: Yeah, it seems like it's pretty simple to exploit. And there’s actually a worm that's exploiting this and finding new systems.
Manny: From what I understand, you can actually put in a command that the server will eventually run, but it may take seven days before the exploit activates. There appears to be some sort of timeout: after seven days the email is determined to have an invalid mail address, and then the server runs the actual command.
Matt: But that means I could hand-type the exploit code. Is that roughly correct or is it something you'd have to craft or a little more difficult to do?
Manny: Right. The example I saw was just a simple command where the server went and did a GET to an external IP address.
Matt: So you're getting a shell.
Manny: Yes. Or you can have the box basically go run some code offline or off net, so it basically gives you an open command line to run whatever you want on the box.
Matt: So it's totally possible that your box has been exploited and you won't know for seven days? That's a scary thought, right?
Manny: The sky is the limit when it comes to a bad actor that wants to take advantage of this vulnerability. They can come up with anything they want to. If they want to mine cryptocurrency, they can. If they want to set the server up to do DDoS attacks, they can. I think, Mike, you said that there is a patch f
As I talk to organizations in the AT&T Executive Briefing Center and learn more about different types of business and enterprise security goals, one theme resonating across industry verticals today is Digital Trust. The goal is to build trust in the system between the consumers of your services and the enterprise. Achieving it means returning to the foundational aspects of information protection: establishing the measures that help enterprises build confidence among consumers, employees, customers and other stakeholders while increasing the adoption of new digital channels.
What is digital trust?
Digital Trust refers to the level of confidence that customers, business partners and employees have in a company or organization's ability to maintain secure networks, systems and infrastructure, especially with regard to their sensitive data. As more and more data breaches are reported in the news, Digital Trust has become a mainstream concern for virtually all stakeholders on the web. Digital Trust is a "make-or-break" issue, not a "nice to have".
Organizations are now viewing digital transformation initiatives with a lens of digital trust while managing an ever-widening list of priorities to address risk exposure, regulatory and compliance requirements - all with a leaner IT/Security team.
As organizations work to build customer-focused, digital business models, it’s critical to consider the role of trust and privacy in the customer journey. Delivering digital trust isn’t a matter of propping up a highly secure website or app, or avoiding a costly, embarrassing data breach. It is about creating a digital experience that exceeds customer expectations, allows frictionless access to goods and services, and helps protect customers’ right to privacy while using the data they share to create a customized and valuable experience. Today’s security strategies are, in large part, still responding to yesterday’s challenges. From reports of exposed personal information to data misuse, trust incidents are becoming increasingly visible to the public.
What are the key attributes to a trust-focused organization?
Cyber risk is recognized as business risk. Business leaders should actively support the need for persistent visibility into digital customer behavior online, even as the cybersecurity team works to strengthen safeguards against threat actors and data privacy risks.
Visibility is valued. User experience should be as pleasant and streamlined as possible for customers. Trust should feel virtually seamless for customers. Barriers should only appear to suspected threat actors. Data analytics solutions can provide visibility into a customer’s movements across digital platforms and identify risks by comparing near real-time data to a baseline of known threats. When an abnormal pattern of customer logins, transactions or behavior is identified, the system should automate an immediate response to further authenticate users or isolate risks.
Design thinking. The process of delivering digital trust is about more than security and technology; it’s a shift in mindset that places the customer experience at the center of digital transformation. Secure code and processes, with security as an active consideration rather than an afterthought, are critically important to success. Baked-in security offers greater assurance against risks and creates an easier digital experience across channels.
Empathy is at the core of trust delivery. Digital trust is a moving target, like any other strategic business goal. Your organization can’t rely on stagnant strategies to grow profitability or address risks. To bui
A common discussion in the security industry is how to improve the effectiveness of detection and prevention systems. You can find tons of documentation and books on topics such as the Defender's Dilemma, Blue Team vs. Red Team, and comprehensive security approaches. However, in any organization it is very important to move beyond theory and implement specific solutions to detect security attacks and threats.
In this post, I want to share some thoughts about one specific topic: Network Intrusion Detection Systems (NIDS), specifically a really good piece of software called Suricata.
Let's start with some background. Intrusion detection is a broad concept that refers to some type of mechanism or process for identifying security threats. Organizations typically use solutions like Host Intrusion Detection Systems (HIDS) and Network Intrusion Detection Systems. Response capabilities are also quite popular in Intrusion Detection Systems (IDS): several vendors offer Endpoint Detection and Response (EDR), and some are using a newer acronym, Network Detection and Response (NDR). I think the industry is trying to be more consistent by adding the word “response” to both endpoint detection and network detection.
In NIDS, there are two main approaches: signature-based detection and anomaly-based detection.
A signature-based intrusion detection system operates in real time, capturing traffic and looking for signature matches. If a match is found, the system generates an alarm.
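For instance, a signature in Suricata's rule language might look like the following (the sid, message, and port here are illustrative values, not a real production rule):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"ILLUSTRATIVE - inbound SSH connection attempt"; flags:S; classtype:attempted-recon; sid:1000001; rev:1;)
```

When a packet matches every condition inside the parentheses, the engine raises an alert carrying the rule's message and signature ID.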
An anomaly-based system looks for abnormal behavior that represents threats. Instead of concentrating on individual packets, it looks for unusual behavior: anomalies and deviations from a normal baseline.
Having said that, let’s switch gears to the main topic of this post which is Suricata.
“Suricata is a high-performance Network IDS, IPS, and Network Security Monitoring engine. It is open source and owned by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed by the OISF.”
Besides the official definition, I think Suricata is a very powerful open source NIDS. It is a signature-based IDS, and once it is properly configured, Suricata is capable of doing real-time traffic inspection and triggering alarms when suspicious activity is detected in your environment. Suricata also offers a very extensive list of features. The complete list can be found on the official Suricata site.
From that list, I would like to highlight an important one: threading.
Suricata is capable of running multiple threads. If you have hardware with multiple CPUs/cores, the tool can be configured to distribute the workload across several threads at the same time. You can start by running a single thread and processing packets one at a time. Nevertheless, in my experience, multi-threading is a much better configuration and the way to improve Suricata's performance.
Suricata has four thread modules:
Packet acquisition: responsible for reading packets from the network.
Decode and stream application layer: decodes the packets and reassembles streams for application-layer inspection.
Detection: compares traffic against the loaded signatures; this module can run across multiple threads.
Outputs: processes all the alerts and events.
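The thread modules above are tuned in suricata.yaml. A minimal sketch of pinning workers to specific cores might look like this (the CPU numbers are assumptions about the host, not recommended values):

```yaml
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]          # management threads on core 0
    - worker-cpu-set:
        cpu: [ "1-3" ]      # packet-processing workers on cores 1-3
        mode: "exclusive"
```

With set-cpu-affinity enabled, each worker thread stays on its own core, which avoids cache thrashing under heavy traffic.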
Organizations usually focus on cyber threats that are external in origin. Defenses include anti-malware, external firewalls, DDoS attack mitigation, external data loss prevention, and the list goes on. That's understandable: external cyber attacks are very common, so it's vital to protect your networks from unauthorized access and malicious penetration. The internet and unauthorized physical access to your facilities will always be risks, and they must be monitored and managed. But it's easy to lose sight of an often overlooked attack surface: the one on the inside. Internal cyber attacks are more common than many people assume, and you ignore that reality at your peril. Here's why you should be prepared for internal cyber threats, and what you can do about it.
The impact and importance of insider attacks
Insider threats to your network typically involve people who work as employees or contractors of your company. They belong in your facilities, and they often have user accounts in your networks. They know things about your organization that outsiders usually don't: the name of your network administrator, which specific applications you use, what sort of network configuration you have, which vendors you work with. External cyber attackers usually need to fingerprint your network, research information about your organization, socially engineer sensitive data from your employees, and acquire malicious access to a user account, even one with the least amount of privileges. So internal attackers already have advantages that external attackers lack.
Also, some insider threats aren’t from malicious actors. Some insider threats are purely accidental. Maybe an employee will accidentally leave a USB thumb drive full of sensitive documents in a restaurant’s washroom, or click on a malicious hyperlink that introduces web malware to your network. According to Ponemon Institute’s April 2018 Cost of Insider Threats study, insider threat incidents cost the 159 organizations they surveyed an average of $8.76 million in a year. Malicious insider threats are more expensive than accidental insider threats. Incidents caused by negligent employees or contractors cost an average of $283,281 each, whereas malicious insider credential theft costs an average of $648,845 per incident. But the bottom line is that all of these incidents are very expensive and they must be prevented.
Comparing insider versus outsider threats and attacks
So insider threats can be a lot more dangerous than outsider threats. As far as malicious attackers are concerned, insiders already have authorized access to your buildings and user accounts. An outside attacker needs to work to find an external attack vector into your networks and physical facilities. Those are steps inside attackers can usually skip. It's a lot easier to escalate privileges from a user account you already have than to break into any user account in the first place. A security guard will scrutinize an unfamiliar individual, whereas they will wave hello at a known employee.
The same applies to accidental incidents. I don’t know any sensitive information about companies that I’ve never worked for. A current or former employee often will, and it may be socially engineered out of them.
Because of the privileged access that insiders already have, they can be a lot more difficult to detect and stop than outsider threats. When an employee is working with sensitive data, it’s very difficult to know whether they are doing something malicious or not. If an insider behaves maliciously within your network, they can claim it was an honest mistake and therefore it can be challenging to prove guilt. Insider threats can be a lot more
When analyzing malware and adversary activity in Windows environments, DLL injection techniques are commonly used, and there are plenty of resources on how to detect these activities.
When it comes to Linux, this is less commonly seen in the wild.
I recently came across a great blog from TrustedSec that describes a few techniques and tools that can be used to do library injection in Linux. In this blog post, we are going to review some of those techniques and focus on how we can hunt for them using Osquery.
LD_PRELOAD is the easiest and most popular way to load a shared library into a process at startup. This environment variable can be configured with a path to a shared library to be loaded before any other shared object.
We can utilize the ldd tool to inspect the shared libraries that are loaded into a process. If we execute the sample-target binary with ldd we can see that information.
linux-vdso.so.1 is a virtual dynamic shared object that the kernel automatically maps into the address space of every process. Depending on the architecture, it can have other names.
libc.so.6 is one of the dynamic libraries that sample-target requires to run, and ld-linux.so.2 is in charge of finding and loading the shared libraries. We can see how this is defined in the sample-target ELF file by using readelf.
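These listings can be reproduced on most Linux systems. Here is a quick sketch using /bin/ls as a stand-in for the sample-target binary:

```shell
# List the shared libraries the dynamic linker resolves for a binary
ldd /bin/ls

# Show the NEEDED entries recorded in the ELF dynamic section
readelf -d /bin/ls | grep NEEDED
```

The NEEDED entries from readelf are what ld-linux consults when it resolves the libraries that ldd displays.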
Now, let's set the LD_PRELOAD environment variable to load our library when executing the binary.
We can see our sample-library being loaded now. We can also get more verbose information by setting the LD_DEBUG environment variable.
A simple way to hunt for malicious LD_PRELOAD usage with Osquery is by querying the process_envs table and looking for processes with the LD_PRELOAD environment variable set.
SELECT
  process_envs.pid AS source_process_id,
  process_envs.key AS environment_variable_key,
  process_envs.value AS environment_variable_value,
  processes.name AS source_process,
  processes.path AS file_path,
  processes.cmdline AS source_process_commandline,
  processes.cwd AS current_working_directory,
  'T1055' AS event_attack_id,
  'Process Injection' AS event_attack_technique,
  'Defense Evasion, Privilege Escalation' AS event_attack_tactic
FROM process_envs JOIN processes USING (pid)
WHERE key = 'LD_PRELOAD';
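To run this hunt continuously rather than ad hoc, the query can be added to an osquery configuration schedule. A minimal sketch (the schedule name and 300-second interval are arbitrary choices, not recommendations):

```json
{
  "schedule": {
    "ld_preload_hunt": {
      "query": "SELECT pid, key, value FROM process_envs WHERE key = 'LD_PRELOAD';",
      "interval": 300
    }
  }
}
```

Results then flow to osquery's configured logger, where your SIEM can alert on any process that appears with LD_PRELOAD set.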
Every week, the AT&T Chief Security Office produces a set of videos with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them, and you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. The video features Joe Harten, Director Technology Security, AT&T, Jim Clausing, Principal Member of Technical Staff, AT&T and Stan Nurilov, Lead Member of Technical Staff, AT&T.
Joe: It looks like even ransomware authors can go into early retirement.
Jim: So, Joe, I understand you have a story about that - more ransomware authors retiring.
Joe: Yes, exactly. I picked this up from Threatpost. Kind of an interesting angle we don’t talk about much. But on the dark web, some researchers picked up on the authors of the GandCrab ransomware issuing a statement that they're retiring, that they're shutting down their infrastructure and they're not going to do any more decryptions and that the GandCrab ransomware is no longer operating. As of June 1st, they shut it down after a little over a year. It had started in January of 2018. So GandCrab is a pretty prominent ransomware. It does standard ransomware things - encrypted files get a .GDCB file extension. So that's where GandCrab comes from. It was distributed through a host of vectors, including spam, fake software downloads, exploit kits and social engineering.
The dark web post basically said the authors claim to have made $2 billion, which they equate to approximately $2.5 million per week. So between the ransomware as a service and the fees paid directly to the ransomware operators, 2 billion in about 18 months. From this point forward, they issued a warning. No further decryptions. If you purchase the ransomware now, meaning you operate it, you're not going to get files back for any future victims.
This is kind of the other end of the spectrum. This is the malicious actors' view of their posts to the dark web saying, "You know, we're done. We've washed all our money, we've made a huge bounty and we're getting out of the business."
I just thought it was interesting. You know, we are always looking at from how to protect yourself from ransomware. But it’s interesting to have a glimpse into what it's like to be somebody who is cashing the checks for these things. So I don't know, what do you think Stan or Jim?
Jim: I'm hopeful that law enforcement will catch these guys and bring them to justice.
Joe: Yeah, I agree. I mean with this level of, kind of, braggadocious mentality, posting on the dark web - you hope there's some investigator in there somewhere, you know, someone purporting to be one of their buddies who's actually in law enforcement, and maybe they'll come to justice. But that's not the way the story is told right now.
Stan: It almost reminded me of another malware author, the one behind Mirai, who did something similar. The creator of the Mirai source code I believe just put it out there, made some big statement and said, "You'll never catch me," or something like that. And then a few months later, he was caught by, I believe, the FBI - or certainly by law enforcement.
Terry Sweeney: Welcome back to the Dark Reading News Desk. We’re here at the RSA Conference in San Francisco. I’m Terry Sweeney, contributing editor at Dark Reading and I’m delighted today to be joined by Sanjay Ramnath, vice president of product marketing at AT&T Cybersecurity. Sanjay, thanks so much for joining us today.
Sanjay Ramnath: Thanks so much for having me.
Terry Sweeney: This trend of SOAR, security orchestration, automation and response, is generating lots of buzz both here at RSA and among InfoSec professionals as well. Kick us off by explaining what SOAR is and how the companies that use it benefit from it.
Sanjay Ramnath: SOAR is a term that was coined by Gartner. SOAR is really a collection of technologies and processes that aim to solve three problems.
I think the first problem that the SOAR framework aims to solve is: How do you stay ahead of this constantly evolving threat landscape? How do you stay ahead of a rapidly changing network while the modern attack surface continues to expand and network perimeters vanish? You have hybrid environments with on-premises and cloud assets. So one of the core tenets of SOAR is aggregating data, both threat data and intelligence and network visibility, on a single platform so all the downstream operational decisions around security can be fed with this stream of intelligence and data.
The second problem that SOAR addresses is complexity in the security ecosystem and infrastructure itself. When you have a really large number of point solutions and products that protect specific threat vectors, you have two issues. One is a management problem: how do you constantly switch contexts across these different solutions? You also have a problem of too much data and what is called alert fatigue. The SOAR approach attempts to solve this by automating some of the more mundane, resource-intensive, human-intensive tasks like data analysis and correlation so the security operations teams can be a lot more effective and don’t get distracted by the noise. They actually focus on what’s important.
The third thing that SOAR addresses is incident response. What do you do when an incident happens? What do you do when your network is intruded upon? Do you have the right processes? Do you have the right workflows in place? Do you have the right data for investigations? SOAR brings all of these together. So SOAR is not a single technology or a single product, it’s really a concept or a framework that brings detection, automation, response, orchestration and intelligence together under a common set of terminologies.
Terry Sweeney: That’s really helpful and I’m glad you mention automation. It seems like, given the volumes of information that have to be analyzed, this is an essential piece of SOAR. Talk a bit more about why it’s critical to have in combating today’s security issues.
Sanjay Ramnath: You’re never going to have enough resources, bandwidth, and skills in security to stay ahead of the cyber criminals and threat landscape. So I think applying automation where it makes sense really helps streamline security operations. As I mentioned earlier, that means taking this really vast amount of threat data and converting it into actionable, tactical threat intelligence.
Cybercrime is costing UK businesses billions each and every year.
Small businesses in particular are under threat, as they often adopt a more relaxed approach and a ‘not much to steal’ mindset. However, this lack of diligence has caused many companies to close permanently.
Let’s ensure yours isn’t one of them.
Time to start making the issue a priority!
Here are some practical security recommendations for you and your business.
Monitor and identify possible threats
First things first, you need to analyse how secure your systems are. Take a proactive rather than reactive approach to cybercrime.
Do a thorough risk assessment, analysing all areas of your business and paying close attention to any weak spots. Instead of waiting for an attack to happen and then taking the necessary actions, reduce the chance of an attack as much as possible. This includes:
Being aware of all the latest cyber threats (from phishing to hacking, there are many out there, constantly evolving and taking on new forms)
Keeping your operating systems up to date
Backing up data
Protecting all software
Using an effective password policy
Remember: this isn’t a one-off concern, but an ongoing issue. So, ensure cybercrime is a priority and keep monitoring all potential threats.
Educate your employees
Whether you’re a team of two or one hundred, every employee needs to be educated on the steps you’re taking to mitigate cybercrime.
Bear in mind, this includes anyone who works from home. Ensure all laptops or tablets have the necessary endpoint security software.
This also includes any third parties or contractors who have access to any files on your system.
Dedicate at least one employee to being responsible for the issue: keeping everyone informed and taking the required actions to improve security posture.
Consider All Lines of Defence
A firewall is often the first line of defence in protecting you against attacks. These can be both internal and external. Employees should consider installing one on their home computers, for example.
However, this isn’t the only line of defence to consider.
Ask yourself questions such as:
Is your password policy robust?
Do you have the necessary cybersecurity insurance in place?
Do you have a record of everyone with administrative privileges?
Is your customer data safe?
How would your business cope in a temporary downtime period?
Every week, the AT&T Chief Security Office produces a set of videos with helpful information and news commentary for InfoSec practitioners and researchers. I really enjoy them, and you can subscribe to the YouTube channel to stay updated. This is a transcript of a recent feature on ThreatTraq. Watch the video here. The video features Jaime Blasco, VP and Chief Scientist, AT&T Cybersecurity, Alien Labs, Brian Rexroad, VP, Security Platforms, AT&T, and Matt Keyser, Principal Technology Security, AT&T.
Jaime: Today we are going to talk about how machine learning is being applied in cybersecurity. We will also be discussing how data science can be used to improve threat analysis and threat detection.
Brian: All right, Jaime. Based on this discussion that we already had, maybe you can take us into a little deeper on how you are working with, you know, data science and machine learning in the area of threat detection and threat analysis.
Jaime: Absolutely. So one of the things that I want to start with is clarifying some misconceptions. In the cybersecurity industry, you're seeing many players talking about using AI and machine learning. You're going to see people using those two words in the same context, but I wanted to clarify a little bit about what they mean. For me, artificial intelligence is the broad field, and within artificial intelligence, we can talk about general artificial intelligence and narrow artificial intelligence. General artificial intelligence is something that doesn't exist yet. Right. We haven't been able to create an artificial intelligence that is able to generalize and reason as well as or better than humans. So, when we talk about narrow AI, that's what machine learning is. It uses models that are able to solve a particular, really well-defined problem.
Matt: Right now, we have a very narrow definition of functional artificial intelligence. And machine learning is one version of that, one technique that might be used to teach a machine how to solve a problem.
Brian: You know what, I think the next stage that we need to get to is using artificial intelligence to figure out how to apply artificial intelligence. I mean, quite frankly, that's where it has to be, and it's going to continue to be iterative to get deeper and deeper.
Jaime: I totally agree. If you see some of the latest research from Google and others, the field of AutoML, is really popular with a lot of investments happening. For those of you that don't know what AutoML is, as Brian said, it's basically training a neural network to come up with new neural networks or novel architectures.
Brian: That will be the path to singularity in my opinion.
Jaime: So we can divide machine learning techniques mainly into two categories: supervised machine learning and unsupervised machine learning. There’s a third one, reinforcement learning, that we are not going to talk about today because I still haven't seen many use cases within cybersecurity. We talk about unsupervised machine learning in the area of anomaly detection or data exploration. And a point that I want to make there is we have many cybersecurity products out there that are applying unsupervised learning, including clustering, anomaly detection, etc. I'm not a huge fan of those algorithms in the cybersecurity context because they are prone to many false positives.
Matt: Things that are just clustering and finding things that are similar won't necessarily find you something malicious. That's when you need to apply a
AT&T Cybersecurity had a big presence at Infosecurity Europe 2019 in London, June 4-6. Our theme was unifying security management with people, process and technologies. While the industry is generally moving in the right direction, IT teams still struggle with being overwhelmed on the technology side, not knowing where to begin on the process side, and finding (or being able to afford) people with the right security skill sets.
In addition, network infrastructure managers tend to be disconnected from the CISOs even though they both might sit under the CIO. This leads to security often being an afterthought with new technology initiatives, rather than a core requirement. As Marcus Bragg, VP Sales & Marketing at AT&T Cybersecurity said in the booth, "it's like buying a new car without an airbag and asking the manufacturer to put one in months later." It's the difference between catching issues at the planning stage versus having to remediate them later, possibly after an attack has occurred.
Our team at the InfoSec conference enjoyed engaging with our visitors and exploring ways that AT&T Cybersecurity can help them solve their security challenges.
Here’s a picture of the AT&T Cybersecurity booth:
And, Chris Doman from our Alien Labs security research team gave a talk about threat-sharing and resilience to a packed house.
Other notable observations in terms of industry trends from Infosecurity 2019:
Far fewer blockchain-oriented exhibitors than last year. Perhaps those companies went bankrupt when the coins crashed?
Machine Learning (ML) continues to be a hot topic - but more from vendors who use ML as one of many approaches to data analysis and fewer "pure ML" startups.
Lots of vendors I hadn't heard of that seem to target niche industries.
Many companies offering Security Orchestration, Automation and Response (SOAR) types of solutions.
Lots of security training companies were at the conference, which makes sense given the cybersecurity skills shortage.
You can’t fix the flaws you don’t know about – and the clearer your sense of your organization’s overall security posture, the better equipped you are to improve it. Vulnerability assessments are a core requirement for IT security, and conducting them on a regular basis can help you stay one step ahead of the bad guys.
Ultimately, a vulnerability assessment helps you shift from a reactive cybersecurity approach to a proactive one, with an increased awareness of the cyber risks your organization faces and an ability to prioritize the flaws that need the most attention. Like a diagnosis of your digital health, vulnerability scanning provides a precise picture of the threat landscape, applying a grade to each vulnerability so your IT team can prioritize and create risk treatment plans, focusing on the biggest opportunities first.
Any company can be exposed to the exploitation of their vulnerabilities; no one can claim to be 100% protected. But, without insight into those vulnerabilities and their effect on your organization’s business operations, remediation plans can’t be put into motion. While conducting your own vulnerability scanning in-house may be attractive for companies, it’s hard to beat the expertise of a third party security provider.
For some organizations, it may be more effective to keep all testing in house due to the understanding of the detailed environment and systems being accessed. On the other hand, for most small- and medium-sized businesses, it is difficult to maintain the level of expertise in-house that a third party provider can offer.
Requirements to properly assess vulnerability scanning results will depend on the company and its mission, and the requisite technical skills and work experience may be hard to come by. An in-house security assessment team may lack specialization, and it’s almost impossible to find well-rounded professionals who know networks, applications, mobility and cloud inside and out and are able to provide recommendations in all areas. Additionally, some compliance regulations require testing to be performed by accredited security professionals and certifying an internal team will come at an additional cost. Regardless of company size or size and expertise of the security team, there are inherent benefits to getting a fresh perspective on your systems and vulnerabilities. A purely internal team that is used to the “status quo” might miss something important.
Getting the maximum benefit from your vulnerability assessment involves adding context: tying the results to business impact through a comprehensive analysis of your company’s goals and vision and then applying that understanding to the outcome. The visibility into your security posture that vulnerability scanning services can provide is invaluable. Whether there is a change to your organization’s environment, the need to prove security compliance, an initiative to transition to the cloud, or the need to handle proprietary customer information, ongoing scans can paint a picture of your security maturity and provide actionable insights for allocating resources and valuable time.
AT&T Cybersecurity offers vulnerability scanning services to meet a variety of needs. Here’s a short video where you can learn more.
Cybersecurity professionals know what they’re up against.
The type, number and severity of cyberattacks grows with time. Hackers display no shortage of cunning and ingenuity in exploiting security vulnerabilities, compromising important data and inflicting damage to both individuals and organizations.
Cybersecurity professionals also know that their defenses must evolve along with the attacks, requiring them to display even more ingenuity than hackers when creating security tools. They also need to layer those tools on top of one another (defense in depth) to make life as difficult as possible for hackers.
One such security precaution is the issuance of transport layer security (TLS) certificates by trusted Certificate Authorities (CAs). The main purpose of TLS certificates is identity assurance, but TLS also provides confidentiality and integrity of data using PKI. After verifying the website server’s identity, the certificates are used to create encrypted channels of communication between that server and its visitors.
Unsurprisingly, hackers have devised workarounds to these certificates, even going as far as buying and selling forged TLS certificates on the dark web. The mere existence of a TLS certificate is no longer enough to guarantee secure internet communication between web servers and clients.
To stay ahead of hackers, the arms race continues.
One such additional measure is known as TLS pinning, which offers an additional layer of security that meshes nicely with what the certificate issuance system already does.
Given the growing severity of cyberattacks on mobile devices and platforms, here’s what TLS pinning means for mobile users and how it affects the downloading of new mobile apps.
What TLS Certificates do and How They Work
TLS certificates work through the “magic” of public key encryption.
The central principle behind public key encryption is that two parties, A and B, who wish to send messages to one another without any third party, C, reading their messages can best do so if each has both a public and a private key that they can use to encrypt and decrypt messages.
The public key encryption process allows A to craft a message for B and use B’s public key — which is available to the public — to turn that message into encrypted gibberish. The only thing that will be able to turn the gibberish back into the original message is B’s private key, which only B has access to.
As long as B doesn't lose their private key and keeps others from stealing it, it won’t matter if C is able to intercept and read A’s message to B. It will be unreadable to anyone but B. The same is true for any message that B sends to A: B encrypts their message with A’s public key, and only A’s private key will be able to decrypt it.
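To make the mechanics concrete, here is a toy sketch of the idea using textbook RSA in Python. The numbers are deliberately tiny and the scheme is unpadded, so this is for illustration only and nothing like a real TLS key exchange:

```python
# B generates a key pair from two small primes (real keys use ~2048-bit moduli)
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: (e, n) is B's public key
d = pow(e, -1, phi)        # private exponent: (d, n) is B's private key

message = 42               # A's message, encoded as an integer smaller than n

# A encrypts with B's PUBLIC key; anyone (including C) may know (e, n)
ciphertext = pow(message, e, n)

# Only B's PRIVATE key turns the "gibberish" back into the original message
recovered = pow(ciphertext, d, n)
print(recovered)  # 42
```

Even if C captures the ciphertext, recovering the message requires d, which B never transmits.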
HTTPS is the TLS Highway
TLS certificates allow web servers to securely communicate with clients protected by public key encryption. Hypertext Transfer Protocol (HTTP) is the standard communication protocol on the internet and Hypertext Transfer Protocol Secure (HTTPS) is the version that uses public key encryption. In HTTPS, communication is secured through a
Healthcare breaches continue to be featured in the news. Hospitals continue to be ideal targets for hacking and other cybersecurity threats. This is evidenced by the increasing number of cyber attacks, including sophisticated ransomware attacks on hospitals. Many hospitals are beefing up their technologies and infrastructure to address the threat of cyber attacks. But they are neglecting a major weak link in data security: the clinicians.
Getting the clinicians on board
Although doctors generally understand the importance of cybersecurity, they are usually reluctant to take the extra precautions needed to secure patient data against cyber threats. This reluctance is probably partly because they believe that such efforts may interfere with patient care. In life or death situations, where every second can count, having a difficult process for doctors to authenticate to get to patient records may cost lives.
Without physician engagement, however, efforts to prevent hacking and other cybersecurity threats cannot succeed. To keep patient data, including social security numbers, address information, and insurance and Medicaid data, from getting into the hands of the bad guys, hospital cybersecurity experts need to engage with the hospital staff.
Designing convenient security systems
The issues clinicians have with data security protocol compliance are compounded by the need to make decisions fast to save lives. Doctors usually need quick access to information to make life and death decisions. Requiring doctors to go through a number of authentication layers can slow down treatment. Hospital staff will sometimes try to bypass these tough security measures, leaving patient data at the risk of being compromised. It is therefore important to make data security systems as convenient as possible to the clinicians. Some hospitals are attempting to increase convenience by providing mobile devices and replacing traditional patient identification systems with biometric systems.
Communicating with clinicians
One reason there are conflicts between cybersecurity experts and clinicians is communication breakdown. Doctors may not understand why certain security measures have to be taken. This is especially the case when there is a new security issue or immediately after a data breach. Cybersecurity experts can get good results by selling the security measures as a patient safety intervention as opposed to explaining them away as administrative issues. IT experts can share data that shows the impact of cybersecurity threats on patient outcomes. For example, a study by Cornell University academics found that data breaches increase a hospital’s 30-day mortality rate. Cybersecurity experts can use data from such studies to appeal to the clinicians' life-saving instincts and to show them the need to be data security-conscious.
Efforts by the healthcare community to stem the tide of cybersecurity attacks against hospitals have largely been unsuccessful. Experts believe this failure is partly due to sidelining clinicians when designing hospital security systems. When cybersecurity experts increase the convenience of security systems and properly explain how such systems can impact patient outcomes, they are more likely to succeed in getting hospital staff to do their part in protecting patient data.
Have you noticed that people are just too busy to read important information you send to them? One of the problems with disseminating information, especially when it is about cybersecurity, is that there needs to be a balance between timing, priority, and cadence.
Timing is simply when the message is sent.
You may send a message of the utmost urgency, such as a warning about a ransomware outbreak. However, if you send that message at 3 AM, it will probably be ignored amid all the other emails that arrived overnight in the recipient’s inbox.
Priority is the importance of the message.
Yes, you can flag a message as high importance (or use some similar setting in your mail client). However, your priorities are not necessarily the same as the recipient’s, so your important message may not generate any heightened interest.
Cadence is the frequency of your messages.
Do you send too many messages? If you do, you run the risk of the “boy who cried wolf” problem, where people will just ignore most, if not all, of your messages.
What can you do to get someone to read the message, or at least retain its most important part? Sure, you could write a single-line message, but that would offer no context.
I recently ran into this problem when I needed to send a message warning of a voicemail phishing scam. I needed high engagement, yet I had just sent another message about a different security event, so my cadence was too tight: the messages were arriving too close together. How could I get recipients to notice this message above the other?
Here is how I used concession to grab the readers’ attention. First, I sent a message that many people may not have entirely focused on:
If you are a total grammar (or typo) geek, you may notice the error I made in the sentence:
We do not use any system that requests a network password to retrieve a voice message from and external site.
Once this message settled in (or became buried beneath the recipients’ other priorities), I followed it with this message:
This deliberate error, and the later concession to it, not only draws the reader to the most important idea in the message; it may also send the reader back to look more closely at the original message, which offers a better chance of the recipient internalizing it.
Of course, this technique could be perceived as manipulative; however, no one was harmed through its use. It also must not be used too often: like all good tools, its effectiveness becomes dulled with overuse. This, too, is part of the balance of social engineering skills, and if you have not already read Chris Hadnagy’s book, it is highly recommended. He can teach you how to use, yet not abuse, some of the best techniques in the social engineering profession to excellent effect.
If used judiciously, concession is a powerful tool to engage a population suffering from information-overload. Tread lightly!
Part of our blog series “How to prevent a WordPress site hack in 2019: lessons from a former hacker”
Hello all and welcome to the first episode of a new blog series focused on how to prevent WordPress site hacks.
In this first post of the series, I will provide videos and articles that together form a set of tutorials showing you the ins and outs of building a home lab, one that gives you the flexibility to test, hack, or learn just about anything in IT.
Personal or home labs can be very subjective: I know people in the industry who have spent thousands of dollars building out personal labs with the latest hardware and software. I tend to take a more minimalist approach to building out my personal lab. Of course, if you work for a manufacturer of a certain technology and they provide you with that technology, then there is really no excuse for not having a great lab built around it.
How to build your home lab on a budget
What I am going to show you in this article ranges in price from free to a few hundred dollars, which for most people is an acceptable amount to spend on a personal lab. To perform the upcoming tutorials, you can use a couple of different configurations. The first is the all-in-one approach, which entails simply virtualizing everything on a regular Windows or Mac laptop or desktop PC. I will include products for both that work great.
The first lab I built for this tutorial was on a Windows machine, and I then got my hands on a Mac to build out the lab there as well. I will say that Windows 10 ships with more free utilities than macOS does; however, macOS is Unix-based and therefore affords you some features that Windows does not, such as terminal sessions that work seamlessly with Linux servers. Windows can do some of this through PowerShell, but I found it a bit more cumbersome to use, and some of the other tools I relied on do not work as easily on Windows.
WordPress on a virtual machine
I chose Kali Linux, virtualized on both the Windows and Mac machines, because it is honestly the most comprehensive penetration testing platform I have found on the internet that is widely accepted and doesn’t carry the risk of bringing loads of unwanted malware into my test environment. But more on that in another episode. Below is a list of the apps and utilities I used to perform the testing tutorials I will be releasing in future episodes.
Offensive Security was born out of the belief that the only real way to achieve sound defensive security is through an offensive mindset and approach. Kali Linux is one of several Offensive Security projects – funded, developed and maintained as a free and open-source penetration testing platform.
We recently announced the release of the new AlienApp for Box in USM Anywhere, which uses the Box Events API to track and detect detailed activity on Box. This new addition to the AlienApps ecosystem provides an extra layer of security to cloud storage services that many enterprises are outsourcing to Box. Beyond monitoring and data collection, USM Anywhere offers early detection of critical events and alerting, thanks to event correlation and business intelligence.
AT&T Alien Labs researchers have devised a set of 18 new correlation rules combining Box Events API features and other security indicators. This set is part of the AT&T AlienLabs threat intelligence feed and is included in every USM Anywhere installation.
USM Anywhere provides a fully configurable visualization panel of events retrieved from the Box Events API. It can be used to get a quick summary of the activity that users and applications performed with the Box service, as well as discover common user actions that may pose a security risk for the account.
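As a rough sketch of the kind of per-user activity summary such a panel surfaces, the snippet below tallies event types per user from a list of event records. The field names (`event_type`, `created_by`, `login`) loosely mirror the shape of Box Events API entries, and the data is invented for illustration; this is not USM Anywhere's actual implementation.

```python
from collections import Counter, defaultdict

def summarize_events(events):
    """Tally event types per user from a list of event records.

    Each record carries an 'event_type' string and a 'created_by'
    dict with a 'login' key, loosely mirroring Box Events entries.
    """
    per_user = defaultdict(Counter)
    for event in events:
        user = event.get("created_by", {}).get("login", "unknown")
        per_user[user][event["event_type"]] += 1
    return per_user

# Invented sample data: two downloads by one user, a delete by another.
events = [
    {"event_type": "DOWNLOAD", "created_by": {"login": "alice@example.com"}},
    {"event_type": "DOWNLOAD", "created_by": {"login": "alice@example.com"}},
    {"event_type": "DELETE",   "created_by": {"login": "bob@example.com"}},
]

summary = summarize_events(events)
print(summary["alice@example.com"]["DOWNLOAD"])  # 2
```

A summary like this makes it easy to spot users whose mix of actions (mass downloads, deletions) stands out from the rest of the account.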
Configuring USM Anywhere to work with a Box account is easy, and it supports most authentication methods: you can configure App Token Authentication or JWT tokens, which use a private key to sign requests. The visual interface guides the user through the credential configuration steps and shows a settings panel, a history of scheduled actions, and a brief status of the application.
Moving data security to the cloud
Box enables companies to store their data in the cloud. This moves the scope of data security to platforms that can be reached from the Internet at any moment, by multiple accounts at the same time. Brute-force login attempts and password-spraying attacks are among the most common intrusion mechanisms.
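For illustration only, here is a minimal sketch of how these two patterns can be told apart in a log of failed logins: brute force is one source hammering a single account, while spraying is one source trying a few passwords each against many accounts. The thresholds and record format are invented and bear no relation to USM Anywhere's actual correlation rules.

```python
from collections import Counter

def detect_login_attacks(failed_logins, brute_threshold=10, spray_threshold=20):
    """Flag suspicious patterns in a list of (source_ip, username) failures."""
    attempts = Counter(failed_logins)  # (ip, user) -> failure count
    accounts_per_ip = {}
    for ip, user in failed_logins:
        accounts_per_ip.setdefault(ip, set()).add(user)

    alerts = []
    # Brute force: many failures against one account from one source.
    for (ip, user), count in attempts.items():
        if count >= brute_threshold:
            alerts.append(("brute_force", ip, user))
    # Spraying: failures spread across many distinct accounts from one source.
    for ip, users in accounts_per_ip.items():
        if len(users) >= spray_threshold:
            alerts.append(("password_spray", ip, None))
    return alerts

# One IP failing 12 times against a single account trips the brute-force rule.
logins = [("203.0.113.9", "alice")] * 12
print(detect_login_attacks(logins))  # [('brute_force', '203.0.113.9', 'alice')]
```

Real correlation rules add time windows and whitelisting, but the core idea is the same: the shape of the failure distribution, not any single failure, is what signals an attack.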
However, is all data treated the same way? How sensitive data is transmitted and stored is a major concern for enterprises. For instance, companies in the financial sector aiming to meet PCI compliance while using cloud storage services need to assure their clients that credit card data is always protected from unauthorized access and available to be consumed at any time. Encryption is also a necessity, keeping data at rest inaccessible to unauthorized users. Box allows customers with special requirements to customize the encryption algorithm applied to their data, meeting performance requirements or facilitating compatibility with their applications.
Box provides a feature called Content Security Policy that allows companies to manage their sensitive data in a special fashion. It automatically detects digit strings matching Social Security number or credit card formats and enables automated notifications or special storage. Features of this type play a leading role in data security and management for companies handling large amounts of data.
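Box does not document its exact matching logic, but a toy version of this kind of pattern detection might pair a format regex with a checksum, as in the sketch below. The patterns are simplified (they would miss formatted card numbers, for example) and the function names are my own.

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. 123-45-6789
CARD_RE = re.compile(r"\b\d{16}\b")              # bare 16-digit strings only

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out random 16-digit strings."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify_content(text: str) -> list:
    """Return labels for sensitive-looking digit strings found in text."""
    findings = []
    if SSN_RE.search(text):
        findings.append("possible_ssn")
    for match in CARD_RE.findall(text):
        if luhn_valid(match):
            findings.append("possible_card_number")
    return findings

print(classify_content("Card on file: 4111111111111111"))  # ['possible_card_number']
```

The checksum step matters: without it, any 16-digit identifier (order numbers, tracking codes) would trigger a false positive.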
To prevent adversaries with access to a Box account from crafting or removing the Content Security Policies configured by the company, USM Anywhere alerts when any of these objects are deleted.
All these alerts are compatible with the security management features of USM Anywhere, including automation of actions, alert suppression, reporting, and pivoting.
I was watching a wonderful webcast by Marie Forleo. It was part of her “Copy Cure” course, and if you are unfamiliar with Marie and her work, take the time to explore some of her wisdom. Her webcasts are gems, particularly if you work in the consulting space.
During the webcast she mentioned a phrase that should be at the top of mind for every InfoSec professional: If you confuse them, you lose them.
Think about the last meeting you had, or the last message you wrote. Was it truly as clear as it could be for its intended audience?
Think of the following example:
An executive received the following email:
Take a moment and think about how you would respond to the executive who sends this message to you and asks “Is this real, or a scam?”
Most of us InfoSec professionals would probably chuckle that the executive doesn’t immediately recognize this as a scam, but that is the first failing of our approach.
When I see this, I assume that the exec recognizes that something is not quite right and is sending it to the subject matter experts for advice. This is definitely preferable to the person just clicking the link and then making the frantic “Oops, I messed up” phone call, or worse, not reporting the error to anyone in the hope that no one notices.
Here is where we InfoSec professionals often make the mistake that creates the confuse-and-lose problem.
Would you simply reply, “It’s a scam, delete it”? That certainly gets the message across, and it allows you to move on with your day, but does it help the exec? Does it teach anything, or does it add to the confusion, leaving the person no richer than when they contacted you?
Think of when you go to the dentist because of a pain, and the dentist responds with “It’s nothing.” Do you feel any better knowing that the pain will not progress into full agony, or would you like to know more? Just as I would ask my dentist “How do you know it’s nothing?”, the executive to whom you just said “It’s a scam, delete it” will probably have the same question: how do you know it’s a scam?
Imagine, however, if you sent the following response:
This is what is known as a credential-theft scam. If you followed that link and filled in the information, your username and password would have been stolen.
The phone number is a non-working number, and the link attempts to connect to a .do domain (the Dominican Republic’s top-level domain, not a Microsoft site).
Please delete it.
Thanks for checking with us.
Here is a sample of the fake site:
In this hyper-sensitive cybersecurity environment, even the busiest executive will appreciate the explanation and enjoy a better understanding of what we do to protect the company. This eliminates the confusion, and it also provides a real-world example of the lessons we teach in the security awareness campaigns that are required by many companies.
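The domain check described in that reply is easy to script. The sketch below pulls the hostname out of a link and flags it when it does not fall under an expected sender domain; the allowlist and URLs are made up for the example.

```python
from urllib.parse import urlparse

# Example allowlist of domains we would expect a genuine Microsoft link to use.
TRUSTED_SUFFIXES = ("microsoft.com", "microsoftonline.com")

def looks_suspicious(url: str) -> bool:
    """Return True when the link's host is not under a trusted domain."""
    host = urlparse(url).hostname or ""
    return not any(
        host == suffix or host.endswith("." + suffix)
        for suffix in TRUSTED_SUFFIXES
    )

# A .do host dressed up with "microsoft.com" in the middle still fails the check,
# because only the rightmost labels of the hostname determine the real domain.
print(looks_suspicious("https://login.microsoft.com.example.do/voicemail"))  # True
print(looks_suspicious("https://login.microsoftonline.com/"))                # False
```

The key point, worth repeating in any explanation to an exec, is that the true domain is read from the right-hand end of the hostname, which is exactly what the scammers count on people not doing.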
Wouldn’t it be great to know that you are providing the valuable service of not only protecting your organization, but also communicating in a way that reduces confusion and eases the perceived pain of cybersecurity? Instead of the phrase “If you confuse them, you lose them”, perhaps we can turn it around to “If you teach them, you reach them”.
Stock sales and trading play a huge role in the U.S. and global economy. Stock exchanges provide the backbone to the economic infrastructure of our nation, as they help companies to expand when they’re ready by offering the general public a chance to invest in company stock.
However, investing in the stock market can be a gamble. You need to understand the market and know what you’re doing in order to receive a return on your investment, which is why many people go through stock brokers.
In order to understand the market and make predictions about it, stock brokers and investors pay close attention to data that helps them understand market trends and where smart investments may be waiting.
However, over the last few years, advancements in technology have provided investors with a new and valuable tool to make informed investments: artificial intelligence. AI has seen a huge amount of growth over the last decade, and it has been adopted in the financial sector for its ability to process data and discover trends.
Machine learning algorithms can track patterns within data and make it easier for investors to make better decisions faster. What does AI’s role in investments mean for the future of stock market analysis?
The stock market moves faster now than it did in the past, which means investors need to do the same. Oftentimes, investors are up tracking the pre-market before the market even opens, analyzing the volume and movement of stocks, as these often change soon after the market opens and throughout the day.
Investors are constantly analyzing vast sets of numbers, including stock prices, gains and losses, and the volume of stock movement at any given time. To get a good feel for how stocks are performing or will perform, brokers and firms add stocks to a watchlist and track them for months to understand their movement.
This process requires the investor to keep track of trends and numbers over long periods of time; however, machine learning has begun to take over some of these steps. AI technology now provides investors with the market analysis history for potential investments, giving them the information they need to make data-driven decisions. The algorithms gauging market trends are able to simplify the process of gathering the information needed to make calls about future stock performance.
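As a toy illustration of this kind of trend tracking (far simpler than the models actual firms use), the sketch below computes a simple moving average over a price series and reports whether the latest price sits above or below it. All numbers are invented.

```python
def moving_average(prices, window):
    """Simple moving average over the last `window` prices."""
    recent = prices[-window:]
    return sum(recent) / len(recent)

def trend_signal(prices, window=5):
    """Crude signal: 'up' if the latest price is above its moving average."""
    avg = moving_average(prices, window)
    return "up" if prices[-1] > avg else "down"

# A gently rising series: the latest price exceeds its 5-period average.
prices = [101.0, 102.5, 101.8, 103.2, 104.0, 105.1]
print(trend_signal(prices))  # up
```

Even this crude rule captures the basic idea from the paragraph above: the algorithm condenses a long history of numbers into a single, quickly readable judgment, freeing the investor from tracking every data point by hand.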
Although machine learning technology is able to make better and faster predictions based on data, there is an increased need for people who can make judgment calls. AI can interpret new information and analyze it against the context of past stock market movement, but it is not capable of predicting market outcomes from information it does not have or that hasn’t happened yet. This means people will continue to have the role they’ve always had: finding the unique insights that will determine the data yet to come.