What is Automated Testing?

Brooks Lybrand

The words “automated testing” elicit many kinds of emotion.

For some, it evokes painful memories of blindly chasing the elusive “100% test coverage,” spending more time writing test code than application code. For others, it inspires confidence, a reminder of how automated tests saved them from shipping damaging bugs. For others still, “automated testing” stirs up guilt, because you know you should make the investment but haven’t quite found the time.


If you’re in that last group, don’t worry: we’re going to dive into automated testing, why you should care about it, and how you can get started.

Everybody’s doing it

First, let’s focus on testing. The truth is, everyone tests their software. From developers to customers, everyone plays a part in testing your application. At the most fundamental level, testing software is about providing some input(s), getting some output(s), and then judging whether you get the result you expect.


Every time a programmer runs their code, they’re testing whether it runs without crashing, looks right visually, and calls all the calculations/functions correctly. When a quality assurance (QA) engineer tests code they might be manually walking through a user story, such as registering a new account, and checking that the behavior is correct at every step.


Ultimately, whether they know it or not, your end users test your software too, every time they use it. And you can tell that users are testing your code because, when things go wrong, most people are not afraid to let you know.


So there’s no avoiding it: everyone, everywhere, is always testing software. But as developers and as a company, you get to choose how you want to test your applications.

Automate it!

What are computers good at? A quick search on the internet will reveal that computers are really good at running repetitive, boring tasks. And what is a test if not a set of boring, repetitive tasks? For example: open this URL, click in the email field, type “mycoolemail@emailsRus.com”, click in the password field, type “password1234”, and click the “login” button.


What are humans better at than computers? Making mistakes, of course! That includes making mistakes doing repetitive tasks. Humans may correctly complete the checklist of a formalized test once or twice, but with continued repetition, chances are that they’ll stumble along the way and ruin the test.


Computers are actually very bad at making mistakes, as long as they receive crystal clear instructions. So, if computers are good at repetitive tasks, and bad at making mistakes, they make the perfect candidates to be your primary testers!


An automated test uses code to tell a computer how to exercise a piece of functionality, check the results, and let you know whether or not everything went as expected.


There are many different types of tests, all serving different purposes. One of the most advanced and powerful types of tests is an end-to-end (e2e) test, where you run your code in the same environment as your users, simulating their actions as closely as possible. In the world of web development this means running your site in an actual browser, while the computer clicks around and types things to see how everything behaves. More on this later.
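
To make that concrete, here is a minimal sketch of what an e2e test for the login steps described earlier might look like, written in Cypress-style JavaScript. The URL, selectors, and expected text are hypothetical placeholders, not a prescription for any particular app.

describe('login', () => {
  it('lets a registered user sign in', () => {
    // visit the page a real user would land on
    cy.visit('https://example.com/login');

    // fill in the form exactly like a human tester would
    cy.get('input[name="email"]').type('mycoolemail@emailsRus.com');
    cy.get('input[name="password"]').type('password1234');
    cy.get('button[type="submit"]').click();

    // assert that the login actually worked
    cy.contains('Welcome back').should('be.visible');
  });
});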

Why doesn’t everyone have automated testing?

At this point automated testing seems pretty attractive. So why isn’t everyone doing it?

There are several reasons a software engineer or team might not be using automated testing. For example, setting up testing takes time, and sometimes the budget or scope of the project doesn’t allow for it. Testing can also take a lot of work to set up if you’ve never done it before: there are libraries to think about, environments to configure, and competing testing philosophies to consider. Additionally, if you’re dealing with databases or APIs, you typically have to mock them or create a test database, so you don’t accidentally erase all of your users’ data or blow the roof off your server bill.

So is automated testing worth it?

Here’s everyone’s least favorite answer: “It depends.”


Are you working on a side project just for yourself? Then probably no. Are you building financial software that is handling super sensitive information for paying customers? Then yes, you’d likely want to use automated testing.


There’s a broad spectrum of testing needs, and all sorts of situations in which automated testing may or may not be a good investment. Overall, you and your team should take the time to really consider whether it’s worth it for you.

One thing that can make your decision easier is realizing that testing is not an all-or-nothing proposition. You don’t have to have 100% test coverage, especially not at the beginning. Since ultimately, given enough time and enough customers, all of your code does get tested, ask yourself: “Would I rather have a computer test this first, or my customer?”


So, it’s probably a good idea to have at least some automated tests in your application, which leads to the question…

Where should I start?

There’s no perfect answer to this question, but there are several good options.


Start with something easy

It will probably take longer to write your first test than the second one. For the first test, you have to acquire all the tools, configure the environment, read the documentation, and so on.


It can be really helpful to start with something simple to get that first win. Test your `addTwoNumbers` function first: you’ll actually feel great getting a test suite to run your code and tell you whether or not you remember elementary math.


The actual test can be something as simple as:


test('adds 2 numbers together correctly', () => {
  const result = addTwoNumbers(2, 2);
  const expected = 4;
  expect(result).toBe(expected);
});
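
And the function under test might be nothing more than the following, shown here only so the example is self-contained (`addTwoNumbers` is a made-up name, not part of any library):

// the hypothetical function being tested above
function addTwoNumbers(a, b) {
  return a + b;
}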


Start with the most important piece

What part of your application would cause the most stress if it failed? This could be your checkout flow, your authentication, or your automated email system. Having confidence that this mission-critical piece of code won’t break without you knowing first can be enough motivation to get you going.


Start with the thing that always breaks

Do you have a piece of code that just falls apart any time someone touches it? That one function, first written four years ago by a developer who’s long gone, which has been patched over more times than your git history can tell you? That can be an excellent place to start. You’ll feel great knowing that your tests pin down the issues with this problem code, and that the next time you update it, you’ll quickly know which changes might cause it to fail. As an added bonus, you might actually understand the code a little better after thinking about how to test its behavior.


Start with new features

Sometimes the easiest way to implement a test is to do it when first adding the feature. In the next standup meeting or the next time you get a request for a new feature, try to carve out some time in the schedule to add a few automated tests.

When should my tests run?

Automated tests serve a major purpose: they let developers know they made a mistake before they share that mistake with their users. So at the most fundamental level, tests should run before the code gets to your user. Here are some ways to run your tests strategically between the time you write code and when you ship it:


While you’re developing

Run test suites in “watch” mode, which means any time you make a change to your code, the appropriate tests run automatically and give you near-instant feedback. Some developers go so far as to write their test first, and then code the feature until the test starts passing. This is known as Test-Driven Development (TDD) or “red/green/refactor”.


Before you commit your code

Right before you commit, run the tests that depend on the files you changed. You can automate this step easily with git hooks, and if you’re using JS, you can leverage great tools like husky to set up your hooks.
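
As a rough illustration (not a drop-in recipe), a pre-commit hook could be a small Node script like the one below. It assumes Jest is your test runner and that the script is wired up through husky or .git/hooks/pre-commit; the file filter and flags are just one reasonable choice.

#!/usr/bin/env node
// Sketch of a pre-commit hook: run only the tests related to staged files.
const { execSync } = require('child_process');

// Collect the JavaScript/TypeScript files staged for this commit.
const staged = execSync('git diff --cached --name-only --diff-filter=ACM')
  .toString()
  .trim()
  .split('\n')
  .filter((file) => /\.(js|jsx|ts|tsx)$/.test(file));

if (staged.length > 0) {
  // --findRelatedTests limits Jest to tests that touch these files;
  // a failing test exits non-zero and aborts the commit.
  execSync(`npx jest --bail --findRelatedTests ${staged.join(' ')}`, { stdio: 'inherit' });
}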


In CI/CD

Running your tests in CI (Continuous Integration) and/or CD (Continuous Delivery) typically means running them somewhere in the cloud, off your computer, in a special environment that closely mirrors your production environment. This can take a bit more work to set up, but it has the benefit of not interrupting your workflow while still ensuring that erroneous code doesn’t slip into production.


If you want to read more about CI/CD, here is a great introductory article.

What tools should I use?

There are many excellent tools available for pretty much every language. If you are in the JavaScript or web development world, you can use the classic Mocha or the more modern Jest testing framework. If you want to run your tests in a browser as a typical user would, there are lots of great e2e testing suites available. Selenium is one of the oldest of these tools, and there are many newer options like Nightwatch or Cypress.


BugReplay is another great tool. BugReplay started as a tool for manual testing, where QA teams could record videos of bugs with time-synced JavaScript console and network traffic logs. BugReplay now has some amazing integrations with several modern e2e test suites. Whenever an automated test fails, BugReplay automatically generates a bug report, and the developer or QA team can simply watch the video and view the JS console and network traffic logs to see what broke and how to fix it.


Reviewing multiple automated tests in BugReplay

Cache me in

BugReplay HQ

So we all know how caching works, right? Let’s have a quick overview to be sure.


Caching uses a high-speed data storage layer to store a subset of transient web data so that future requests for the same data are served up fast, without the need to access the data’s primary storage location. Cached data usually lives in fast-access hardware like RAM (random-access memory), the primary purpose being to increase the speed of data retrieval by reducing the need to access the slower underlying storage layer.
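
If you prefer to see the idea in code, here is a toy read-through cache in JavaScript. It is purely illustrative and nothing like a production cache such as Varnish or a CDN; "fetchFromOrigin" is a stand-in for the slower primary storage (a database, an upstream server, and so on).

// Toy read-through cache with a fixed time-to-live (illustration only).
const store = new Map();
const TTL_MS = 60 * 1000; // keep entries for one minute

async function cachedGet(key, fetchFromOrigin) {
  const hit = store.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // fast path: served straight from memory
  }
  const value = await fetchFromOrigin(key); // slow path: go to the origin
  store.set(key, { value, storedAt: Date.now() });
  return value;
}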


Many businesses and organizations host their own cache using software like Varnish, while others opt for Content Delivery Networks (CDNs) like Cloudflare that scatter the cache across distributed geographic locations. Then there are content management applications like Drupal with built-in caching. Lots of permutations, the same purpose: boosting performance on both the server and client sides.

Know Your Poison

Ok, so now we know how caching works, but what is cache poisoning? Let’s get down and dirty.

Web cache and DNS cache poisoning both work by sending a request that results in a harmful response. That response is then saved to the cache, which serves it up to other users, whether in the form of bogus internet addresses (DNS corruption) or outright web-cache misappropriation and application takeover.

HTTP protocols within the web-caching mechanism only perform integrity checks on the server side. This lack of validation and authentication gives attackers an opportunity to poison the cache repository.

Cold, Hard, Cache Key

When a cache receives a request for a resource, it must decide if it already has a saved copy of this exact resource to reply with or if it needs to forward the request to the application server to retrieve the resource. Identifying whether two requests are attempting to load the same resource is tricky; matching requests byte-for-byte is ineffective because HTTP requests are chock full of immaterial data, like information about the user’s browser and system. Cache Keys allow us to sidestep this problem by defining what information in a visitor's HTTP request matches resources in cache.

Composed of a small number of specific HTTP request components, Cache Keys fully identify the resources the user is requesting. The upshot of Cache Key composition is that a cache finds equivalences in two separate requests, responding to the second with data cached from the first. The vulnerabilities here are pretty obvious.
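
To make the mechanics concrete, here is a rough sketch of how a cache key is often built. The exact components differ between caches and CDNs; the point is what gets left out.

// Illustrative only: many caches key on little more than method, host, and path.
function buildCacheKey(request) {
  return `${request.method}:${request.headers.host}:${request.path}`;
}

// Two requests that differ only in an unkeyed header produce the same key,
// so a response generated from the second request can be served to the first user.
const normal   = { method: 'GET', path: '/', headers: { host: 'example.com' } };
const poisoned = { method: 'GET', path: '/', headers: { host: 'example.com', 'x-forwarded-host': 'evil.example' } };

console.log(buildCacheKey(normal) === buildCacheKey(poisoned)); // true

If the application happens to reflect that unkeyed x-forwarded-host header into the page (in a script URL, say), the attacker’s response is the one everyone else receives until the cache entry expires.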

In theory, sites can use the Vary response header to specify additional request headers to key on. In practice, Vary header usage is pretty scant: many CDNs ignore it entirely, and most people don't even know whether their application supports header-based input.

Venom, they ain’t gonna know what hit ‘em

Cache poisoning isn't an end in itself, but rather a conduit for the exploitation of secondary vulnerabilities like XSS (cross-site scripting). By exploiting secondary vulnerabilities, attackers can progress from simple single-request attacks to more complex exploit chains that hijack JavaScript, skip across cache layers, subvert social media, and corrupt cloud services. The toxic tailoring of data responses through cache poisoning attacks essentially turns your cache into a highly effective exploit delivery system.

HTTP Header Hell

To actually poison a web cache and deliver an exploit to all subsequent visitors, hackers must ensure that they send the first request to the homepage after the cached response expires. A crude means of achieving this is to use tools like Burp Intruder or a custom script to bombard the site with requests. The more discerning attacker, however, may reverse engineer the cache expiry system to predict expiry times and monitor available data over a prolonged period. This is a bit of a commitment, even for the most determined hacker.

Unfortunately, many websites don’t require this level of commitment before they give up their data. Response headers often disclose the age of the cached response and its expiry time, meaning attackers know exactly when to send their payload to guarantee that their response gets cached.
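
You can check how much of this timing information your own site leaks with a few lines of JavaScript. This is a hedged sketch: it assumes the cache emits the standard Age and Cache-Control headers, which depends entirely on your configuration, and it needs an environment with fetch (a browser or Node 18+).

// Inspect caching headers to see how predictable your expiry timing is.
async function checkCacheTiming(url) {
  const response = await fetch(url);
  const cacheControl = response.headers.get('cache-control') || '';
  const maxAge = Number(cacheControl.match(/max-age=(\d+)/)?.[1] ?? 0);
  const age = Number(response.headers.get('age') ?? 0);
  console.log(`max-age: ${maxAge}s, age: ${age}s, expires in roughly ${maxAge - age}s`);
}

checkCacheTiming('https://example.com/');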

Life Imitates Art

In an ironic twist of fate, the much-beloved Mr. Robot TV show fell victim to a cache poisoning attack a while back. Embarrassingly, it took the intervention of whitehat hacker Zemnmez to highlight the vulnerability: a cross-site scripting (XSS) flaw with the potential to expose users’ private Facebook information.

This ethical hacker actually had difficulty alerting the show’s creators and website developers to the flaw and had to go to some lengths to bring it to their attention. Fortunately for Mr. Robot, unlike the show’s protagonist, plenty of hackers out there are trying to help shore up weaknesses in web defenses and prevent attacks like cache poisoning. This attack serves as evidence that even the (seemingly) most savvy among us can fall victim if we let our guard down.

Face Reality, Mr. Anderson

Web cache poisoning through unkeyed input exploitation was once thought too complex to be a real threat; we now know it is a reality, while DNS poisoning is a relatively easy means of accessing a website’s data pipeline.

But what can we do to protect ourselves? There’s no simple fix and disabling the caching mechanism in its entirety is not feasible for most of us, but there are options:

  • Heavily cache static responses, such as *.js, *.css, *.png files, blog posts, landing pages, and consistently identical pages.
  • Audit your DNS public data including zones, records, and IPs.
  • Keep your DNS servers up-to-date to avail of the most recent patches against known vulnerabilities.
  • Strengthen your defenses against Cross-Site Scripting (XSS) attacks by sanitizing your input: Never output user data inputs to your browser without checking for malicious code.
  • Understand and control how and where caching takes place: You can disable frameworks that implement their own caching and handle caching at a single point (for example, Cloudflare).
  • Hide your BIND version and disable DNS recursions: Security through obscurity.
  • Use Domain Name System Security Extensions (DNSSEC) to verify DNS data integrity and origin.
  • Isolate your server from the rest of your application, whether cloud-based or your own dedicated server.
  • Where feasible, restrict caching to static responses and be careful how you define static so that attackers can't trick the back-end server into retrieving malicious versions of static resources.
  • Avoid using web user inputs (such as HTTP Headers) as Cache Keys.
  • If you don’t need a header for your website to function, lose it.
  • Understand the security features and implications of any third-party applications and technologies you allow into your environment.
  • When possible, employ web application scanners to weed out malicious code and board up exposed back-door access.


Cache poisoning is a rapidly evolving cash cow for hackers. While these preventative measures form a strong line of defense, it is vital that you keep your ear to the ground about cache poisoning. Be vigilant, be suspicious, and take the time to create robust fortifications to prevent hackers from weaponizing your cache.

It’s time for the red pill, Mr. Anderson. If you know it’s toxic, don’t be slippin’ under!

A Slice of the Action: 5G network slicing and the implications for your business

BugReplay HQ

Seems like everyone is scrambling to prepare for the arrival of the first commercial 5G networks, supposedly sometime this year. Don’t panic if you're one of the many nodding your head during 5G slicing conversations, pretending you know all about it. We know, it’s a lot. We’re going to drill down and see why it’s the ‘it’ topic, and figure out the right moves to ensure that your virtualized infrastructure is sitting pretty.


So, what is 5G and why am I slicing it?


Simply put, 5G networks are the next generation of mobile internet connectivity. Faster speeds, more reliable connections, low latency, more bandwidth, and denser network coverage are just a few of the potent 5G features with obvious benefits for IoT applications. All good so far, but why slice it?


5G network slicing allows the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. Each slice is an isolated, end-to-end network tailored to meet the diverse needs of particular applications.

In a virtualized network scenario, the physical infrastructure takes a back seat: the logical (software-based) partitions are in the driver’s seat, dynamically allocating capacity to specific resources as the need arises. By sharing common resources such as storage and processors, network slicing lets operators structure several services on the same network while ensuring service reliability, quality, and enhanced security.

Network slicing isn’t new, nor is it specific to 5G. As a concept, it’s been thrown about since the arrival of 4G; the difference is that we now have a standardized definition of how a slice will work end-to-end within the network. The upshot is that we can build a logical network consisting of different network functions while defining the service parameters of each function within that slice: bandwidth, latency, security, and so on.

Providers can offer this new functionality to enterprises as a new means of differentiation and a way to compete with webscale providers like Google, Amazon, and Microsoft. That’s the theory anyway.

What’s in it for me?

One of the major benefits of 5G network slicing is the ability to deploy only the functions necessary to support particular customers and particular market segments. This should result in direct savings compared with deploying full functionality to support applications that use only a portion of it. A targeted, tailored deployment = faster delivery = shorter time-to-market.

Newer technologies, in particular, lend themselves to dynamic, partitioned networks. Augmented reality is a case in point, requiring high data speeds, throughput, and low latency. On the other hand, a smart home security application relies on impenetrable data and security parameters. 5G network slicing provides the elasticity to meet these varied requirements in a cost-effective and efficient manner.

No matter how thin you slice, it’s still baloney - or is it?

This all sounds great: higher speeds, lower latency, cost savings... what’s the catch? The telecommunications top brass would have us believe that the winner of the mad scramble to the top of the 5G pile will reap untold rewards, but there are many who believe this is just hype, a mythical marketing monster designed to scare us into needlessly forking out for new technologies, new training, and new infrastructure. Granted, few disagree that 5G will eventually replace 4G, that’s a no-brainer, but the speed at which this change will occur, and how far-reaching its implications will be, are very much in doubt.

5G standards are still at the discussion stage, with issues such as net neutrality yet to find consensus. The current appetite for 5G slicing remains low, hinging on pricing structures, business use cases, and the processes for re-architecting entire enterprise networks while maintaining the security and integrity of existing infrastructures. Some are even discussing the possibility of retrofitting existing 4G infrastructures to meet requirements once 5G launches for real. Not such a crazy idea, perhaps, given that network slicing is technically possible on 4G networks, meaning businesses could sidestep a 5G outlay and any teething problems that invariably accompany new technology rollouts. And what if another identifier system comes into play when we’re slicing up the network? Theoretically, this could create a separate internet for each application. The internet as we know it today would change completely, resulting in major security headaches.

OK, so we’re playing devil’s advocate here, worst-case scenario stuff, but there are real limitations and caveats to 5G and 5G network slicing as they stand today. That said, few question the momentum of the 5G movement. It’s coming. Of that, there is no doubt. The question is, are you ready to grab your slice of the pie?

Bug Bytes: Narrate your own BugReplay report

BugReplay HQ

Ever find yourself recording a bug report for your developers or IT department and wish you could add notes or simply talk them through the issue? Ever find that the visual walkthrough alone doesn’t quite communicate the problem to the fullest extent?

We know your pain and we’ve done something cool to eliminate it.


Introducing the new BugReplay bug report narration feature!


One simple mouse-click lets you enable audio mic recording to create a voiceover to accompany your bug report.




Communicate your difficulties in real-time, without the need for additional files, lengthy emails back and forth, in-app chats, etc. You can explain the problem and how it affects your experience of the app/site, and what you want developers/IT pros to address to make your life easier. Nothing lost in translation, no misconceptions, no running down the wrong track, no barking up the wrong tree, you get the idea. Time saved. The right people can quickly ascertain the issue and make the right moves to fix it.


Seamlessly contextualize the problem as you experience it without the need to painstakingly itemize each step or check for spelling or nomenclature errors.

Basically, it’s TiVo for bugs!


Written reports may omit important information, or be vague or poorly written. Video recording on its own may leave developers scratching their heads about your issue. Live commentary gives you the best of both worlds: visual evidence of the issue and a narrated account of how it affects you and your work.


Bug reporting can be intimidating when you’re not sure if what you’re experiencing is a feature, user error, a bug, or simply how the product/site/app should work. Our new narration feature eliminates the fear of looking foolish by allowing users the option to verbally recount their experience and clarify their concerns.


Added bonus: In the process of watching and listening to user experiences, developers and IT specialists may discover areas for improvement, opportunities to strengthen defenses, or simply uncover possibilities to pimp out existing feature specs. With this new level of clarity and insight, the sky’s the limit.


Listen, talking through your issues really does help. Hit that Record Mic button next time and get it off your chest, you’ll feel so much better!

Philosophy of Mr. Robot, 101

BugReplay HQ
Control is about as real as a one-legged unicorn taking a leak at the end of a double rainbow.

The sage words of one Elliot Alderson, aka Mr. Robot. Of course, he also said “control can sometimes be an illusion. But sometimes you need illusion to gain control”. Control, therefore, or the semblance of control, is key to preventing and fighting a cyber-onslaught. Mr. Alderson has much wisdom to impart to us on this topic. Together, let’s take a brief Fsociety University ‘Wresting back Control’ class to gain a better understanding of these insights.

Module 1: Recognizing the dangers of the insider

With enough time, a hacker will find the flaws, and there is no one with more time to uncover those flaws than someone already on the inside: just ask Elliot Alderson. Most organizations still focus on developing safeguards against external online attacks, using defensive tools like anti-malware, external firewalls, DDoS attack mitigation, external data loss prevention, and so on. It’s a frightening but all too real fact that the majority of cyber attacks come from trusted employees, former employees, or associates. Whether it be for financial gain, reputational elevation, or bad blood, insider attacks are the most common and the most damaging of all malicious offensives. So how do we spot and ultimately prevent them?

  • Have external and internal penetration testers examine your defenses to identify weak spots.
  • Provide regular training to employees on safe data management and internal cyber risk mitigation.
  • Put tight controls on what information your employees can access.
  • Carefully record what goes in and out of your network.
  • Take extra notice of the actions of any employees who may have just received termination notices and feel they have nothing to lose.
  • Watch for employees that suddenly become extra enthusiastic in their work, volunteering for extra duties, expressing sudden expertise in areas other than their core role. Hey, we’re not saying shoot volunteers, just be savvy!

Module 2: Protecting mobile assets

As seen during the hack of the FBI’s temporary office at E Corp in Season 2, it is imperative that we pay attention to the vulnerabilities of mobile devices, which provide a conduit for accessing sensitive data and for injecting malicious actors into our systems. Some simple measures to help shore up mobile defenses include:

  • Keep an up-to-date inventory of devices and who uses them.
  • Ensure that devices, particularly laptops and tablets, have up-to-date firewall and antivirus protection installed, with personnel designated to manage this throughout the year.
  • Use encryption software and implement a top-down data encryption/decryption program for all sensitive company data.
  • Use biometrics and identity control software to ensure that only assigned personnel can access mobile devices.
  • Install mobile security applications on all mobile devices to constantly run security checks throughout the operation of that device.
  • Ban the use of public WiFi networks.
  • When you retire devices, wipe them clean of all data before disposal.
  • And, seriously, stop using 12345 as company passwords!

Module 3: Don’t forget the mundane

We can get hung up on intricate and elaborate cyber violations and how to defend against them, forgetting that our most sensitive data often resides in very ordinary and very exposed places. Data-loss prevention (DLP) providers estimate that almost 90% of an organization’s intellectual property resides in email. With another 90% of business data loss occurring through phishing and social engineering scams, it’s clear that email is a key area of exposure. Moreover, as we saw in Season 4 of Mr. Robot (and the real-world Panama Papers), valuable data can also reside with our business partners and suppliers.


So, what to do?

  • Train your staff, partners, and suppliers in email security best practices.
  • Push your encryption policies to your business partners and suppliers, making it a mandatory element of communications.
  • Train your staff not to hoard unnecessary or sensitive emails.
  • Warn employees to keep an eye out for scam emails that request a password change as part of a security shakeup. Tell them that if there’s any doubt, they should visit the provider’s website directly for accurate security updates.
  • Use identity verification software to ensure that the sender is who they claim to be.
  • Be wary of web-based email. If you use a web-based client, encrypt the connection with Secure Sockets Layer (SSL) protection and always check for an https URL.

Module 4: Tempus fugit

The DDoS attack in Season 1 demonstrated the importance of a swift, well-executed response. Employing a well-prepared, masterful strategy to counter the attack, the team was back on track in 5 hours. This is a realistic recovery period if you are well prepared, but with the typical DDoS strike manifesting in wave after wave of individual, disparate attacks, it can be difficult to know what to prepare for. Let’s take a look at how to establish a basic line of defense:

  • Spot the signs: a sudden, sharp increase in website traffic, or a slowing of performance.
  • Scrub traffic via the ISP. Deal with the attack in a remote environment, removed from your main infrastructure: reach out to your internet provider, who may have the means to scrub traffic from the originating IPs and block further malicious attacks.
  • Set up routers and firewall policies to filter non-critical protocols, block invalid IP addresses, and cut off access to high-risk areas of your network. Many firewall vendors offer anti-DDoS technology that lives on the perimeter of your network, detecting and dealing with DDoS attacks quickly and effectively. Definitely worth the additional investment.
  • Route malicious traffic into a black hole until the situation abates. The difficulty with this approach is that it blocks all traffic, good and bad.
  • Investigate using a Content Delivery Network to create replicas of your website for different locations.

POP quiz

  • Did you study all modules carefully?
  • Did you examine your cyber-threat policies in light of this new information?
  • Did you add Mr. Robot Season 4 to your watchlist?

You have successfully completed this ‘Wresting back Control’ class. You are now ready to infuse your cyber-threat defense strategy with the wisdom of Mr. Robot.  And remember, 'the devil is at his strongest while we're looking the other way.'
