Recent Posts

Accountants can't be auditors in software development and test

Posted by admin on Apr 25, 2018 9:19:35 PM

Each week I come across organisations that have outsourced their software development and testing to a single company. These companies are responsible not only for building the applications but also for testing them to ensure they work the way their clients intended. It’s like the fox guarding the hen house.

In the financial world, organisations are restricted from using the same firm for accounting and auditing. These restrictions exist to guard against bias, which inevitably creeps in when you've been living and breathing a business for some time. In application development, by the time you are ready to release new software you are too close to see the forest for the trees.

At Bugwolf, we give our clients an independent, fresh set of eyes to audit their software during the digital delivery cycle, before it is released to customers, and after it reaches production. Our testing teams have no existing relationship with the development team, because we believe your accountants can’t be your auditors when you're building and testing software.

 

Read More

Accessibility testing and universal design

Posted by admin on Apr 25, 2018 9:19:35 PM

Universal design is based on the concept of taking the needs of diverse users into account right from the initial design through to the final product.  In order for universal design to be effective it must follow certain principles.

The first principle is that of equitable use.  This means that an application is designed to be usable by people with different levels of ability.  This includes not stigmatizing or excluding particular users while providing for equivalent ways in which people with different abilities can use the software.

Flexibility is also an important part of universal design.  An application or website should be flexible enough to accommodate people with different physical challenges.  It should also assist the user in being accurate and precise.  This isn’t a new idea and has been used in the development of hand tools for at least a century.

When it comes to software, flexibility is achieved through communication.  An application or website should present, or at least be capable of presenting, the same information in different ways.  These ways need to be compatible with the possible limitations that users might have.  This includes such things as audible as well as written instructions, and the ability to present information in ways that are accessible to those who are physically impaired.  Communication is a two way street and any application should be capable of receiving input in various ways.
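
As a rough illustration of that two-way communication, the sketch below (plain browser TypeScript, using a hypothetical search button) shows one common technique: pairing a purely visual icon with an equivalent text label that assistive technology can announce. It is a minimal example of the principle under assumed markup, not a complete accessibility implementation.

```typescript
// Minimal sketch: one control, two presentations. The magnifying-glass icon is
// the visual cue; the aria-label is the equivalent announced by screen readers.
// The form and button here are hypothetical examples for illustration only.
const searchButton = document.createElement("button");
searchButton.type = "submit";

const icon = document.createElement("span");
icon.textContent = "🔍";                   // visual presentation
icon.setAttribute("aria-hidden", "true");  // keep the decoration out of the accessibility tree

searchButton.append(icon);
searchButton.setAttribute("aria-label", "Search the site"); // announced presentation

document.querySelector("form")?.append(searchButton);
```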

Finally, accessibility has always been a specialised activity, and it begins with eliminating complexity.  Such simplification includes intuitive design choices, such as placing the most important information first and keeping input and output consistent.

Accessibility testing should take all of the above into account as part of a comprehensive UX testing process.  By adhering to the principles of universal design throughout the software development cycle, developers can be assured that the application or website under development meets a high standard of accessibility.

Read More

A+ testers are the key to fast and effective testing

Posted by admin on Apr 25, 2018 9:19:35 PM

Steve Jobs was one of the most innovative businessmen to come along in a very long while. He was also a perfectionist, both in the products he created and the people he hired. He set high standards and refused to compromise, yet many would consider his standards rather unconventional. For one thing, he cared less about the bottom line than about the quality of what his company produced, and he achieved high-quality products by hiring high-quality people. He often interviewed applicants himself and was dedicated to finding the best talent possible. He looked for A+ players.

What are A+ players? A and A+ players share certain characteristics. They take the initiative and are innovative. They aren't afraid to fail and can accept correction without ego getting in the way. A players set high standards and they are competitive, even though they can collaborate easily and often do. They also expect a great deal from their coworkers and can sometimes be workaholics. To the A player, the job isn't just something that needs doing or a way to make money; it is a measure of self-worth.

At Bugwolf we are dedicated to recruiting and retaining A+ testers: people with the skill sets and the motivation to perform high-quality work. We recruit them through a rigorous vetting process, and we stimulate their A+ personalities through recognition and reward and by gamifying the testing process, giving them the opportunity to excel. In this way, we can assemble elite teams who hunt down bugs faster and at less cost, so that clients can have their websites, mobile apps or software skillfully and securely tested in an A+ environment.

Read More

A short history of user testing

Posted by admin on Apr 25, 2018 9:19:35 PM

The story of user testing arguably begins with Frederick Taylor and his time and motion studies. Taylor, though, wasn’t all that interested in usability. He was more concerned with increasing worker efficiency.

It wasn’t until World War Two, and the need to help fighter pilots respond quickly to changing circumstances, that the concept of the user interface became really important. The idea of intuitive layout, a centrepiece of modern usability, made the P-51 Mustang one of the easiest planes to fly, no matter how challenging the environment. The need to mitigate the stress of combat led to the development of procedures to collect and analyse information on how people interacted with many different types of machines and weapons.

Still, the concept of usability didn’t instantly take off in the digital industry that developed after the war. The limited nature of computers in the 1950s and 1960s demanded that users adapt to the computer’s needs, rather than the other way around. But research continued, and user testing became increasingly important as computers and interface design grew more sophisticated.

Personal computing made user testing a vital element in software development. The advent of windows, desktops and pull down menus provided the basic tools that opened up the Internet to the masses. User experience testing increased in importance with every technological step forward. UX testing now exists at every level of software development. And applications that do the testing must now be as user friendly as the software they are designed to test.

What began as a way of making people more efficient in the service of machines has become a way of making machines more efficient in the service of people. There’s quite a distance between Taylor’s original ideas and the desire to make application interfaces as easy and stress-free as possible. Yet time and motion are now in the service of the user. The modern concept of usability is a testament to the human capacity to build an increasingly interactive world.

Read More

A short history of agile testing

Posted by admin on Apr 25, 2018 9:19:35 PM

Fundamentally, this means that policies, processes, and even beliefs are developed and derived from the ideas and influences that exist at the time of their creation, and they carry that lineage. Couple this with the fact that no one can predict what the problems of the future will be, any more than the Wright Brothers could have anticipated airframe vibration in jet aircraft at supersonic speeds. This means that, whether we like it or not, every idea, process and procedure will become obsolete at some point.

Unfortunately, groups, and particularly group leadership, usually don’t understand this. To paraphrase Eric Hoffer, it is the desire of power to give its edicts the force of natural law. We see this sort of thing all the time in the political arena, where the legislature passes laws in an attempt to use police power to “solve” psychosocial issues.

This same sort of mentality of, “impose a policy and that will solve it,” can be seen in the corporate world as well. It is a dramatisation of the desire for simple solutions and easy answers when the universe is anything but simple and answers are hard to find.

This “corporate mentality” got along fine back in the days when doing any worthwhile number crunching required a computer the size of the Johnson Space Center. That is, way back in the good old days of the horse and buggy and the IBM 701.

Today, however, with the rapid advancement of software applications, not to mention the miniaturisation and portability of the hardware that runs those applications, the old systems like Waterfall and others are simply too cumbersome.

That’s why agile testing was developed. In the old style of testing, goals were set and that was it. Testing was done using procedures developed within the framework of “edict with the force of natural law.”

In agile testing, goals emerge from self-organising groups that form in response to conditions. In the old way, testing was just the last procedure before release. In agile testing, the testing team acts as the “headlights” of the group, pointing the way forward through its interface with end users.

While the old methods of testing won’t be thrown away any time soon, they do rest on some glaring, unwarranted assumptions. Chief among them is the idea that requirements won’t evolve beyond those presented in the original requirements documentation.

Another critical problem is that testing takes place on the completed software. This forces the testing unit to operate on a “surprise to crisis” basis, as new builds and bug fixes cause a cascade of problems that leads to massive retests, delays in launch and, most importantly, financial consequences.

Agile testing, on the other hand, takes place directly in contact with the client, and the application is created in small increments that allow for easier testing. It is also easier to integrate tested and functioning units than to reinvent the wheel every time changes are made.   

The whole idea of agile testing is to provide a system of software testing that operates as a front line endeavor. The purpose is to bring testing down out of the ivory tower and run it as a real-world operation controlled by real people and not edicts from on high.

Read More

7 ways to improve mobile user experience

Posted by admin on Apr 25, 2018 9:19:35 PM

Keep it simple

Virtual keyboards aren’t as easy to use as the real thing, so avoid the need to enter text whenever possible. Use drop-down menus for complex actions and toggle icons for simple actions. Always use the least complex input method that will accomplish the task.

Good mobile design is clear, intuitive and inviting. Simplify design and avoid confusing images, colour schemes and fonts.

Keep it easy

This is especially true of navigation. Navigation labels should be unambiguous, menu items should be intuitively understandable, and functions should be obvious. It’s also a good idea to make sure that visitors always have an easy way to get back to your homepage, as people often use the homepage as a stable point from which to search. And test for usability before your site goes online: the only way to tell if your mobile site is easy to use is to test it.

Know what potential customers are looking for

Be aware of the niche culture surrounding your products. Know what’s popular and what is important to customers and help your customers find what they are looking for by presenting pertinent information. It’s never a good idea to treat your website like one of those throwaway snail mail catalogues.

Keep it streamlined

If customers must enter numbers, provide them with a number pad. Use simple forms that are easy to move through and that advance automatically from one field to the next. Forms should also be easy to correct if the customer makes a mistake.
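
As a minimal sketch of those two ideas (plain browser TypeScript, with hypothetical card-number fields), the snippet below requests the numeric keypad on mobile devices and advances focus automatically once a field is full; the class name and field layout are assumptions for illustration only.

```typescript
// Minimal sketch: ask mobile browsers for a number pad and auto-advance focus
// between hypothetical card-number segments once each one is full.
const fields = Array.from(
  document.querySelectorAll<HTMLInputElement>("input.card-segment") // assumed markup
);

fields.forEach((field, index) => {
  field.inputMode = "numeric"; // mobile browsers show the numeric keypad
  field.maxLength = 4;

  field.addEventListener("input", () => {
    // Move on automatically when the segment is complete; the customer can
    // still tap back into any field to correct a mistake.
    if (field.value.length >= field.maxLength && index < fields.length - 1) {
      fields[index + 1].focus();
    }
  });
});
```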

Keep your calls to action clear and provide an easily understandable user path that enables customers to reach their goals.

Prioritize

The most important information should be the easiest to find, because potential customers will judge your site based on how easily they can find what they are looking for. That also means it’s a good idea to keep a search box easily visible.
 

Keep it human

Provide genuinely useful content, not empty fluff or a perpetual sales pitch. Help customers filter search results by informing them of the number of search results ahead of time and don’t be afraid to ask a few questions to ensure that they get what they are looking for.

Keep it customer oriented

Focusing on your potential customer’s mobile site experience through user testing and careful evaluation of usability will pay dividends in the form of conversions. Realize that new visitors will be unfamiliar with your site and help them to browse without the need to provide personal information right up front, and assure a quality user experience with usability testing.

Giving your visitors a pleasant and productive mobile user experience is crucial to online success.

 

Read More

6 Reasons Why Automation Can't Replace Manual Software Testing

Posted by admin on Apr 25, 2018 9:19:35 PM

Testing and quality assurance are amongst the biggest constraints when it comes to delivering software products on time. While customers and clients generally understand that no software is ever truly bug-free when it first ships, a critical bug or error can be devastating to a developer's sales and reputation. 

With so much of the infrastructure we rely on in our daily lives dependent on accurate and unfailing software, it's more important than ever to catch and correct bugs in the code. As such, there has been a big push lately toward more automation in QA testing. 

The argument goes that automated, script-driven testing can test more components more accurately, rigorously, and consistently than human testers can, and in a fraction of the time. That's true, but those who favor automated testing have to contend with the fact that, while automation can do many things better and faster than human beings, there are still many things it can't do at all. 

Manual testing is still a vital part of the QA process and human testers are far from obsolete. In fact, many businesses do not achieve their desired results with automated testing alone. 

The 2017-18 World Quality Report found that automated testing technologies only perform about 15% of common test activities. A hybrid approach that uses automation while covering the gaps with manual testing may be the ideal approach, but it's important to understand the strengths and unique benefits that manual testing can provide.

Don't get us wrong—we're big fans of automation here at Bugwolf. In fact, we've developed a mature automation suite of our own, but that's only one piece of the digital quality puzzle. As long as you've got humans using your website or app, you need a human perspective on what works and what doesn't. 

Here are six of the key reasons why you can't (or at least shouldn’t) write off the manual testing process.

1. Exploratory Testing

When a human tester finds an element of a program that's behaving in odd or unexpected ways, they can stop and dig into it. They can experiment with different inputs and let their own intuition and curiosity guide them to investigate further, factoring in what they've discovered during the remainder of the testing process.

That exploratory testing is virtually impossible to script. Almost by definition, exploratory testing refers to what happens when you stray away from the beaten path, letting "what happens when I do this?" be your guiding principle.

Automated testing is great for finding bugs in the core functions of the software, the code that always gets run, the predictable test cases. Actual users, however, rarely stay within the expected boundaries, and that's why you need manual testers. They will behave just as unpredictably, poking holes in strange places and finding the bugs that hide from automated processes.

In particular, mobile apps have complicated use cases. Think of all the variables that are completely outside the control of the developer that can affect how a mobile app performs – everything from declining battery life to intermittent Wi-Fi signals to sweaty hands. There's no way an automated script can account for all those things. Yet human testers can, and when they find one unpredictable thing that impacts the software's performance, they can drill down on it to learn more.

2. Human Insights

Automated scripts can check to see if code functions as intended, but no script can wrap itself around the gestalt of an app to tell you if it feels right, if it's doing what it's intended to do at a conceptual level. All it can tell you is whether the code was written correctly or not.
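
To make the contrast concrete, here is a minimal sketch (TypeScript, using a hypothetical formatPrice function) of the kind of check automation handles well: exact inputs, exact expected outputs, pass or fail. Nothing in it can tell you whether the price ends up readable or well placed on the screen.

```typescript
// Minimal sketch of a scripted functional check: exact inputs, exact expected
// outputs. It proves formatPrice() is written correctly, and nothing more.
import assert from "node:assert";

function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

const cases: Array<[number, string]> = [
  [0, "$0.00"],
  [1999, "$19.99"],
  [105000, "$1050.00"],
];

for (const [input, expected] of cases) {
  assert.strictEqual(formatPrice(input), expected);
}
console.log("All formatPrice checks passed"); // says nothing about how the price looks on screen
```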

Humans are creative and analytical, and manual testing gives you a human perspective on your software's performance and functionality. People can spot misleading or confusing visual issues that scripts would gloss right over.

They can reproduce customer-caught bugs and errors. They can also bring their own experience to the table, with testing informed by past experiences finding software bugs, writing code, or pushing an app to its limits. This empirical knowledge can yield insights that could never be anticipated in advance, or by a machine.

Human testers can also catch issues that aren't bugs by definition, and don't conform to strict pass-or-fail testing standards. Speed, visual noise, ambiguities, and other usability issues can easily slip right by an automated test if they're technically working as intended.

If you want to test the strength of your code, automated testing is well-equipped to do just that. But if you want to see how your software is actually going to perform in the wild, you need human testers.

3. Bugs in the Gaps

An automated test can only do what it's told to do. No matter how powerful or dynamic a scripted test may be, there will be gaps in its testing methodology. 

Automation can't find things you don't know how to look for, and bugs are often found where you least expect them. Many bug testers report that they often discover bugs when they're trying to test out something completely unrelated to what they actually end up discovering.

Scripts are written by humans, which means they're limited by the experience and imagination of the person writing them. It also means that the scripts themselves can contain bugs or errors, which can generate false positives, overlook bugs that would be obvious to a human tester, or fail to test everything that needs to be tested.
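
Here is a minimal sketch of how that can happen (TypeScript, with a hypothetical cart-total check): the script below contains a classic bug of its own, a missing await, so the failing assertion is swallowed and the run is reported as a pass.

```typescript
// Minimal sketch of a false positive caused by a bug in the test script itself:
// the asynchronous check is never awaited, so its failure is silently swallowed.
import assert from "node:assert";

async function fetchCartTotal(): Promise<number> {
  return 105; // stand-in for the app under test; the discount was NOT applied
}

function testDiscountApplied(): void {
  fetchCartTotal()
    .then((total) => {
      assert.strictEqual(total, 95, "10% discount should be applied"); // this throws...
    })
    .catch(() => {
      // ...but the rejection is ignored here instead of failing the test
    });
  console.log("testDiscountApplied: PASS"); // logged before the check ever runs
}

testDiscountApplied();
```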

4. The Limits of Scripting

Scripts have other limitations, beyond the skill and knowledge of the people writing them. There are external constraints on scripting as well: the time, labour, and costs involved in building robust, comprehensive directions for an automated testing process.

Some scenarios are too complicated, too expensive, or too weirdly specific to be worth testing with automation. A lot of work goes into writing, refining, and deploying a script that can thoroughly test a large software program, and automation is frequently too expensive to use for small projects.

Good testing is repeatable but also variable. Thus, attempting to modify automated scripts to perform variable testing on a just-discovered bug is almost never efficient, in terms of time or money. Manual testing may be slow, but the high costs of setting up and maintaining automated tests cannot be ignored.

5. Validating the User Interface

In many apps and websites, few aspects receive more attention or focus than the user interface. Automated scripts are ill-suited to providing any insights into the quality of the UI beyond verifying that the underlying functionality works as intended.

Manual testers can take the big picture of the software they're testing into account; they can understand not just how the code is supposed to work, but also whether or not the software meets the needs and expectations of the actual people who comprise its target market. In most cases, the UI is the biggest part of this.

Buttons that don't look like buttons, alerts that fail to catch your attention, and text that's too small or stylized to read easily are just a few of the things that scripts are often blind to, but even untrained testers will pick up on immediately.

6. The Development Environment

Automated scripts can perform testing functions with incredible speed, but setting up a script for the first time can be a slow, labour-intensive, and costly endeavour. And in an Agile environment, where bug fixes and other changes are being made as parts of a continuous development cycle, it's hard to fit scripting updates into the process without bringing everything to a grinding halt.

When your work is organized into sprints, it's difficult to keep scripting updates on track, and they tend to lag behind. Even in a more traditional development environment, it rarely makes sense from an efficiency standpoint to allow delays or reorganize priorities, in order to update or modify automated testing scripts.

The Best Way to Run a Bug Hunt

You can get a lot of testing done with automation – and there are certainly many aspects of testing that automation does well – but if you're looking to automated scripts to free you from any reliance on manual testing, you're going to end up shipping out buggy software. You're also likely to miss many areas of potential improvement that manual tests would have identified right away.

The optimal solution for QA is to use both methods. Automated scripting should be utilized for the obvious and predictable use cases, for stress testing, and for weeding out unambiguous coding errors. For everything else, however – the weird bugs that pop up when you play around with the software in an "off-label" way, the UI problems that electronic eyes can't see, the on-the-fly testing you have to do when code changes as part of a fast-paced development cycle – there's no substitute for human testers. People can explore your software inside and out in ways you might never have expected, bringing all of their experience, imagination, and unconventional modes of thinking to bear on the code you've compiled.

Another way to put this: automated testing will catch the bugs that would show up immediately after launch and result in immediate updates and some bad press. But if you want to find the bugs that won't show up until years later – perhaps at some critical moment – you still need real human beings as a part of the process.

Read More

5 IDC Predictions For IT In 2019

Posted by admin on Apr 25, 2018 9:19:35 PM

International Data Corporation, a leading provider of analytics and advisory services for the information technology sector, has released their predictions for trends and key drivers in the IT industry for 2019 and beyond. Here are five big shifts they're forecasting for the years ahead:

1. Digitised Economy

IDC believes that by 2022, more than 60% of global GDP will be digitised, and that every industry's growth will be sustained by products, operations and relationships that are enabled or enhanced by digital technology. IT expenditure over 2019-2022 is expected to reach $7 trillion globally.

According to IDC, the businesses that will thrive in the years ahead are those that invest in digital optimisation on two fronts: their internal operations and their outward business model. While it's always easier for companies to make internal improvements than to reinvent the customer-facing side of things, IDC says that businesses which don't tackle both sides of digital optimisation risk losing up to two-thirds of their market by 2022.

2. Third Platform

IDC predicts that by 2023, 75% of all IT spending will be on the "Third Platform" - their term for smartphones, Internet of Things appliances, and any other device that lets users interface with cloud-distributed software - as companies focus on building "digital native" environments to transform their enterprises.

3. Edge Computing

IDC also expects more and more companies to turn to edge computing as an ideal way to handle the increased demand for artificial intelligence processing, predicting that by 2022 over 40% of companies' cloud-based systems will involve edge computing and 25% of end-user devices will be executing AI algorithms.

4. New Developer Class

IDC sees a new professional developer class emerging by 2024, driven by the perpetual shortage of developers and the need to keep turning out innovations and improvements. They expect the developer population to grow by 30% by then, made up mostly of developers who code without the use of custom scripting.

5. AI > UI

If IDC is correct, then Siri, Cortana, and Alexa have been showing us what personal computing is going to look like in the future. They're predicting that a third of screen-based app user interfaces will be replaced with AI-driven voice recognition by 2024 as natural language understanding technologies continue to advance, resulting in more of us operating our devices by having a conversation with them.

You can read the entire report, with all of IDC's predictions, in a publication titled IDC FutureScape: Worldwide IT Industry 2019 Predictions.

Read More

2017 Browser, Device & OS Releases

Posted by admin on Apr 25, 2018 9:19:35 PM

The evolving digital landscape

Every digital leader knows the importance of rigorously testing websites and apps before release…

But even if you uncover and remediate every bug before release, chances are your customers are still stumbling across bugs on your live digital assets.

That’s because the digital landscape is constantly evolving. Last year there were over 71 significant browser, OS and device releases. So it stands to reason that all digital assets are affected by some degree of “drift”.

For more information on each of the updates, visit the Bugwolf External Environment Updates Board.

This infographic highlights how the landscape changed in 2017:

[Infographic: 2017 browser, OS and device updates]

Read More

£1.7 Million Blackjack Jackpot Withheld Because Of Computer Glitch

Posted by admin on Apr 25, 2018 9:19:35 PM

Green stopped playing when he hit the jackpot—an astonishing £1,722,923.54. He called the bookmaker, Betfred, to confirm his winnings and went out to celebrate with his friends. Four days later, Betfred called him back to inform him that his big win was caused by a “system malfunction” and that they would not be paying out.

Betfred proposed a deal, offering to cover the £2,500 tab Green ran up at the pub while celebrating and to pay him an additional £60,000 if he signed a non-disclosure agreement binding him to a promise not to speak publicly about his jackpot win.

However, Green decided in the end to take legal action against Betfred, as he claims that they have refused to provide him with any evidence of the glitch that caused him to win such an enormous prize. The case has since gone to trial at the High Court.

Betfred has issued a statement addressing the case, which reads: “Betfred loves to pay out all our jackpot winners—both big and small. Unfortunately, and as Mr. Green is aware, a new game release suffered a software malfunction in January this year and no legitimate jackpot win occurred. Given that Mr. Green is currently exploring his legal options, it would be inappropriate for us to comment further.”

Speaking to BBC News about Betfred, Green said, “They are quick to take people’s money but when it comes to payout they offered money as a gagging agreement. How many are there out there who have signed similar agreements? Even if there was a glitch I did nothing wrong. I played that game and pressed a button.”

At a preliminary hearing last week, Betfred claimed that they did not have access to the game data that would show evidence of the glitch, and could not compel the game developer to provide it.

Read More
