Web Scraping in 2022 & Beyond

Web scraping has come into the limelight in recent years due to the rising interest in data. Businesses across the globe have been eyeing automated data collection as a way to enhance their profitability and overall decision-making.

We’ve sat down with the Lead of Commercial Product Owners at Oxylabs.io, Nedas Višniauskas, to talk about the future of web scraping. Few people have been as deeply involved with the industry as Nedas, which has allowed him to gain a unique perspective on how it has developed and how it will continue to do so.

What do you think has been the biggest change in web scraping over the last decade? How has Oxylabs participated in these changes?

There have been some interesting changes during the past few years. One of them, I think, has been the proliferation of increasingly sophisticated anti-bot systems. Scraping websites protected by such systems at scale, in turn, has become more difficult.

Scraping enthusiasts, of course, have their own answer to these issues: developing dedicated data collection tools. These tools, while narrower in their field of use, can bypass anti-bot systems and are constantly updated for that purpose.

Another important change has been the rising popularity of JavaScript. More and more websites use it to load critically important data dynamically, which means that data is essentially unreachable without a browser.

Headless browsers, therefore, are a necessity. At the same time, infrastructure costs rise, as headless browsers consume far more computing power and traffic than simple HTTP requests.
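To make that cost difference concrete, here is a minimal illustrative sketch in Python, assuming the requests and Playwright libraries and a hypothetical JavaScript-heavy product page. The plain HTTP request returns only the initial HTML shell, while the headless browser executes the page’s scripts and sees the fully rendered content, at the price of running an entire browser process.

```python
# Illustrative sketch only; the URL is hypothetical.
# Assumes: pip install requests playwright && playwright install chromium
import requests
from playwright.sync_api import sync_playwright

URL = "https://example.com/products"  # stands in for any JavaScript-heavy page

# 1) Plain HTTP request: cheap, but returns only the initial HTML shell.
#    Data injected later by JavaScript is simply absent from this response.
shell_html = requests.get(URL, timeout=10).text

# 2) Headless browser: renders the page like a real user, so dynamically
#    loaded content is present, but it costs far more CPU, RAM, and traffic.
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")  # wait for background requests to settle
    rendered_html = page.content()
    browser.close()

print(len(shell_html), len(rendered_html))  # the rendered document is typically far larger
```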

Finally, ethics have been in the limelight. For example, residential proxy providers are looking for ways to inform and reward participants of the network. We ourselves took charge of building the framework for ethical acquisition, which, I believe, has played a part in there being fewer shady practices and more clarity among all industry participants.

To answer the second question, Oxylabs has responded to these changes by developing Scraper APIs. We created both dedicated and universal scrapers that can acquire publicly available data from nearly any website without issue. Additionally, all of our proxies are ethically sourced, giving our partners much-needed peace of mind when engaging in scraping.

Have you seen or noticed any particular trends in data acquisition or web scraping? Are specific data types becoming popular?

Off the cuff, I’d say that the use of ecommerce and delivery data has been booming since the pandemic hit. Businesses want to (legally) spy on competitors and gain access to as much data as possible. Data types like pricing, products, or delivery times are important to any competitor.

But these have always been important. Maybe I would say that external data in general has risen in importance. Outside of that, I don’t think there have been any particular trends in data types. There have been, however, changes in the entire supply chain. As I’ve mentioned, businesses really just need the data, and even then, the data itself is not the key – insights are.

As such, businesses at the tail-end of the chain have proliferated in recent years. Data-as-a-service aggregators, ones that collect information and sell sets of it, have been rising in popularity.

There are also some businesses that provide insights directly. While these are still few and far between, some of them have unique value propositions that I could see as worthwhile. Jungle Scout, for example, is a service that both scrapes external data and has large datasets from internal sources. As such, they can provide insights other businesses can’t.

What do you think are the biggest challenges the industry is facing currently? Are there any innovative solutions to these or other challenges on the horizon?

Bot protection has always been the greatest challenge. Scraping, you see, is a cat-and-mouse game. Websites implement anti-bot measures, such as the well-known CAPTCHA, while scraping companies keep evading them to retain access to data.

There have been great strides made in bot protection. TLS (Transport Layer Security) fingerprinting has been one such improvement. Sophisticated websites can fingerprint the initial network handshake and check it against the HTTP headers a client sends. As many scraping tools manually modify those headers, the TLS fingerprint often doesn’t match the claimed browser, which is a dead giveaway.
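As a concrete illustration of that mismatch, consider a minimal sketch in Python using the standard requests library against a hypothetical URL. The HTTP headers claim desktop Chrome, but the TLS ClientHello is still produced by Python’s OpenSSL stack rather than Chrome’s, so a site that fingerprints the handshake (for example, JA3-style hashing of cipher suites and extensions) can spot the inconsistency no matter what the headers say.

```python
# Illustrative sketch only; the URL is hypothetical.
import requests

headers = {
    # The headers claim to be desktop Chrome...
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36"
    ),
    "Accept-Language": "en-US,en;q=0.9",
}

# ...but the TLS handshake (cipher suites, extensions, and their order) is
# generated by Python's ssl/OpenSSL stack, not by Chrome's TLS stack.
# A fingerprinting site sees a "Python client" handshake paired with Chrome
# headers: exactly the kind of mismatch described above.
response = requests.get("https://example.com/", headers=headers, timeout=10)
print(response.status_code)
```

Tools that work around this generally have to reproduce a real browser’s TLS behaviour, not just its headers.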

On the other hand, the deck is always slightly stacked in favor of scraping. Most anti-bot protection features put a dent in the overall user experience. Filling in a CAPTCHA detracts from the frictionless experience of the modern web we’re used to.

Some businesses use these techniques and see no issue. Others, highly concerned with delivering the best user experience possible, avoid using CAPTCHAs unless absolutely necessary. It’s always a tradeoff. More bot protection almost always means worse UX, which leads to less revenue. But then fewer people are scraping your website.

Additionally, new pages with interesting data and content appear all the time. And you don’t start building a website with bot protection; it has to be functional first. So, for a long while, scraping such sites is a lot easier than it could be.

Would you say that there are potential benefits in web scraping for academic research or policy-making? If so, why hasn’t the scientific or political community adopted the practice?

Academic research, quantitative research in particular, is largely based on data that doesn’t yet exist on the internet. There could be studies, however, on internet behavior or something of the like where scraping could be immensely useful. Additionally, I think we’re not seeing widespread adoption due to the previously mentioned barrier to entry.

Let’s imagine that there’s no previous scraping experience at some particular university. A researcher would have to build everything from the ground up, acquire all the necessary deep knowledge, and secure the funding required just to start collecting data.

It doesn’t help that the research areas that would benefit the most from scraping (like sociology, economics, psychology, etc.) are far removed from coding, development, and IT in general. I think it’s more of an unfortunate, but temporary, circumstance, because web scraping providers will be able to reduce the barrier by a significant margin in the future.

When it comes to policy-making, I’m not so sure. I think it’s less about making policy and more about enforcing it. Governments are definitely knee-deep in web scraping for all kinds of security purposes. Businesses, on the other hand, have been using the same processes to protect themselves from counterfeits and copyright infringement. There’s an entire business vertical dedicated explicitly to brand protection.

 
