First and foremost, scraping the web (in PHP) is bad, mkay? PHP web scraping isn’t worse than doing it with other languages, it’s just that web scraping in general is most likely to be looked on with disdain by content producers. It’ll also make your code more brittle than it should be, and it’s generally going to make an application more complex to build.
That said, sometimes “web scraping” is your only choice. If you love PHP (I do), and need to do some web scraping, you’re in the right place. In this article I’ll give a quick summary of the state-of-the-art for PHP web scraping, and some details about why to do it and what tools to use. Let’s start this PHP web scraping tutorial!
What is Web Scraping?
Web scraping with PHP is no different from any other kind of “web scraping.” And while different people mean different things when they say “web scraping,” what I mean is that you’re extracting information from within the HTML of a web page because the owner of that information hasn’t made it available in a REST, SOAP, or GraphQL API. (Nor any other kind of identifiable programming-friendly interface.)
I was recently looking to do some web scraping to get information about our web affiliate businesses for WPShout. (BTW, if you’re looking for the best WordPress web hosting, read our great article on the topic.) The owner of the affiliate program data didn’t offer any notable API. But I wanted us to have an “earnings dashboard.” So I built a PHP scraper script to fetch it for me. And it’s working great. (Unfortunately I can’t show it to you, as I don’t want to share the data it scrapes.)
The Reasons to do Web Scraping
So we web scrape because…
- We want data inside of our PHP script
- The owner of the data doesn’t expose an API by which we can more efficiently get that data
- We really want their data inside of our PHP script
I said “we want that data” twice because a defining characteristic of web scraper scripts is that they’re fragile. Because they’re getting at underlying data presented in a web page’s internal HTML, they can break for seemingly random reasons, like a designer changing the HTML that surrounds the data you’re seeking. This is why Intuit (makers of Mint, QuickBooks, etc.) spends millions on its bank web scrapers every year. (Conservatively. But without citation 🙃)
The Reasons Not to Do Web Scraping
Here are a few of the reasons that your web scraper (once it’s working, which is a whole other topic…) will break:
- The data is now presented differently in terms of its text strings
- The data’s location on the page moves because of design considerations
- The host’s data format (the surrounding HTML) changes because of design considerations
- The data you’re trying to scrape gets intentionally obfuscated by its host (think of Facebook’s anti-ad-block markup)
In short, web scraping should always be the last resort. You’re doing a thing that the content producer is, at best, a little disappointed by. If they wanted to make that information available to you, and they could, they would have. They may truly not have the technical capacity, or the interest. Which is when web scraping is a great fit. Because a slow-moving website is one of the best targets for scraping data from websites using PHP.
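One cheap defense against all that fragility is to check that the node you expect actually exists before you read from it. Here’s a minimal sketch using Symfony’s DomCrawler (the crawler object the libraries below hand you); the selector and the error message are my own illustration:

```php
<?php
require 'vendor/autoload.php';

$client  = new \Goutte\Client();
$crawler = $client->request('GET', 'http://example.com/');

// filter() returns an empty crawler when the selector no longer matches,
// and calling text() on an empty crawler throws an exception.
$nodes = $crawler->filter('h1');
if ($nodes->count() === 0) {
    // The markup changed out from under us -- fail loudly, don't guess.
    throw new \RuntimeException('Scraper broke: no <h1> found on the page');
}
echo $nodes->text();
```

A loud, descriptive failure like this turns “mysteriously wrong data” into a bug report you can act on the day the host redesigns.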
Why Use PHP for Web Scraping?
There are a number of PHP web scraping libraries. And while I’ve not done an exhaustive search, I do suspect there are better languages than PHP to use for scraping. I doubt the absolute best web scraping framework is written in PHP; no PHP web scraping framework I know of is mind-blowingly good.
The primary reason for doing PHP web scraping is that you know and love PHP. Use PHP for your web scraping if the rest of your application (that’s going to use the result of this web scraping) is written in PHP. Scraping with PHP is not so easy that I’d plan to use it in the middle of a Python web project, for example. The PHP scraping libraries are quite good, but they’re not amazing.
Reasons to Avoid PHP Web Scraping
Web scraping with PHP is easy enough, and good enough, that I’d do it without a second’s hesitation in a PHP project. So the primary reason I wouldn’t do PHP scraping? That I knew a different language better, or was already using one. Web scraping with PHP is not so much better that I’d use it in preference to a language like Java that I was already writing my project in.
The other big reason not to do PHP web scraping is simply that you don’t want to do web scraping at all. There are tons of good reasons for that, including the increasing commonness of CAPTCHAs and other bot-stopping measures. It’s still a useful technique to know, for sure, but it’s less useful than it was a decade ago.
Getting Started with PHP Web Scraping
There are a number of PHP web scraping framework options. While I could make this tutorial a thorough tour of each one of them, I think the Symfony projects Goutte and Panther make a potent combination here, and I wouldn’t really make an effort to use a different system. You can if you need to, but I won’t give you a full list.
Which PHP Web Scraping Libraries Should I Use?
So, I think the obvious answer here is “whatever you like.” No PHP scraping framework I’ve ever tried is so good that I’d use it in preference to another.
I started doing some light PHP web scraping in the context of a project that was using the Symfony PHP web framework. And, in general, I enjoy the Symfony tools enough not to look for others. So what we’ll cover in the rest of this PHP web scraping tutorial is FriendsOfSymfony/Goutte and Symfony/Panther. But there are a lot of good options. In general, the major difference I’d highlight is between a PHP web scraping library like Panther or Goutte, and a PHP web request library like cURL, Guzzle, Requests, etc.
In my mind, a PHP web request library is distinguished from a web scraping library because:
- It can make requests using all the major HTTP methods
- It can get you the basic HTML of a page, which you can parse how you’d like
- It doesn’t help you parse the web page your HTTP request returns
- It doesn’t help you make a series of requests in sequence while moving through the web pages you’re trying to scrape
So I’d count Goutte, Panther, and Laravel Dusk (which we’ll briefly cover at the end) as properly PHP web scraping libraries. I’d count just about every other PHP tool I’ve ever heard of as a “request library.”
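To make the distinction concrete, here’s what a bare request library hands you. This sketch uses PHP’s built-in cURL extension; note that all you get back is one raw HTML string, and every step after that (parsing, navigating, submitting forms) is on you:

```php
<?php
// Plain cURL: a "request library" in the sense above.
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirects
$html = curl_exec($ch);
curl_close($ch);

// $html is just a string. Finding the <h1> is your problem now:
if (preg_match('/<h1[^>]*>(.*?)<\/h1>/s', $html, $matches)) {
    echo trim($matches[1]);
}
```

That regex-on-HTML step is exactly the kind of fragile hand-parsing the scraping libraries below save you from.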
Getting Started with Symfony Goutte
So, let’s get to the step-by-step of our PHP web scraping tutorial. Goutte was the first PHP web scraper I used, and it still works pretty well for all the basic needs you’ll have: getting pages, filling in their web forms, and extracting content from them.
To use Goutte, we must first get it:
composer require fabpot/goutte
Here’s a script that will scrape a page with Goutte:
```php
<?php
require 'vendor/autoload.php';

$client = new \Goutte\Client();

// Fetch the page and get back a crawler for its HTML
$crawler = $client->request('GET', 'http://example.com/');

$fullPageHtml = $crawler->html();
$pageH1       = $crawler->filter('h1')->text();
```
This is using the PHP package manager, Composer, as will all the other examples here. I don’t yet have an article to get you started with Composer; let me know if you need one.
What this crawler does is pretty simple: it goes to example.com and loads the page. Then it filters the HTML and pulls out the page’s <h1> element, getting us its content. There’s nothing very cool here, but it should give you a sense of how Goutte works for PHP web scraping.
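Goutte’s crawler can do a bit more than grab one element. Here’s a sketch of two other common moves: collecting every matching node with each(), and following a link by its visible text. (The link text below is example.com’s actual “More information…” link; adjust it for whatever site you’re targeting.)

```php
<?php
require 'vendor/autoload.php';

$client  = new \Goutte\Client();
$crawler = $client->request('GET', 'http://example.com/');

// Collect the text of every paragraph into a plain PHP array.
$paragraphs = $crawler->filter('p')->each(function ($node) {
    return trim($node->text());
});

// Follow a link by its visible text; you get a new crawler for the new page.
$link    = $crawler->selectLink('More information...')->link();
$crawler = $client->click($link);

echo $crawler->html();
```

Note the reassignment of $crawler after click(): every navigation gives you a fresh crawler for the new page.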
When You’ll Need Symfony Panther
You’ll want Panther when you need a real browser behind your scraper, such as when a page builds its content with JavaScript. Here’s our PHP scraper script that will load the same page with Panther. First, get the package:
composer require symfony/panther
Then use this in your PHP script:
```php
<?php
require 'vendor/autoload.php';

// Panther spins up a real Chrome instance behind the scenes
$client = \Symfony\Component\Panther\Client::createChromeClient();

$crawler = $client->request('GET', 'http://example.com/');

$fullPageHtml = $crawler->html();
$pageH1       = $crawler->filter('h1')->text();
```
This is, you’ll notice, identical to the above Goutte code. We’ll highlight some of the cooler features of Panther in the next snippet. At this point we’re just replacing Goutte with Panther, and getting the same page content. If you actually execute both of these, you’ll notice that Panther is way slower than Goutte. Spinning up Chrome is way more expensive than just getting HTML with PHP itself, which is all Goutte is doing under the hood. Not the end of the world, but certainly something to know.
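Since spinning up Chrome is the expensive part, it’s worth knowing you can tune it. createChromeClient() accepts an optional array of arguments passed to the Chrome binary; the flags below are common ones I’d reach for, but treat the exact set as an assumption to check against the Panther docs for your version:

```php
<?php
require 'vendor/autoload.php';

use Symfony\Component\Panther\Client;

// Second argument: flags passed straight to the Chrome binary.
$client = Client::createChromeClient(null, [
    '--headless',             // no visible browser window
    '--window-size=1200,800', // some sites render differently at mobile widths
    '--no-sandbox',           // often needed inside Docker/CI containers
]);

$crawler = $client->request('GET', 'http://example.com/');
echo $crawler->filter('h1')->text();

$client->quit(); // shut Chrome down cleanly so it doesn't linger on port 9515
```

That final quit() call is also the polite way to avoid the orphaned-Chrome problem described in the tip below.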
Random useful tip for Symfony Panther play: I’ve had a lot of times when my development scripts would error out and leave a Chrome instance running on port 9515, which makes later runs raise an exception reporting the issue. The Bash command kill $(lsof -t -i:9515) is the best way I’ve found to kill it and get my script working again.
Doing More Complex Operations with Goutte and Panther
The parts of web scraping I struggle with most are the operations other than loading a web page and getting some data that was in the underlying HTML. Things like filling forms, finding and clicking links, and more are possible, but not as obvious; I actually find them a little awkward with Goutte and Panther. But to get you started, here’s an example of filling in the search field on Wikipedia and clicking the search button:
```php
<?php
require '../vendor/autoload.php';

$client = new \Goutte\Client();
// For Panther:
//$client = \Symfony\Component\Panther\Client::createChromeClient();

$crawler = $client->request('GET', 'https://www.wikipedia.org/');

// Find the search form and fill in its "search" field
$form = $crawler->filter('#search-form')->form(['search' => 'web scraping']);

// Submit it; note the reassignment of $crawler to the resulting page
$crawler = $client->submit($form);

// For Panther:
//$client->takeScreenshot('screenshot.png');
//$client->waitFor('.firstHeading');

echo $crawler->filter('.mw-parser-output p')->first()->text();
```
What’s awesome about this example of a web scraping script is that you can adapt it to just about any form you can think of. What’s not great is that, while playing with the above snippet, I wasted about 90 minutes because I forgot the second assignment to $crawler from the form submission, and was getting obscure errors when running with Symfony Panther.
What does the above do? It searches Wikipedia for “web scraping” by finding the search form on the homepage and submitting it. (Because of the relatively smart structure of Wikipedia’s URLs, you might not need this part. But because lots of other things you’ll want to scrape require you to fill out a form, I intentionally did this search by filling out the form rather than “URL hacking,” where I’d just take a best guess at the structure of the final URL.)
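For comparison, the “URL hacking” version of that same search, skipping the form entirely, is nearly a one-liner. The URL structure here is my best guess at Wikipedia’s search endpoint, which is exactly the kind of guess that makes this approach fragile:

```php
<?php
require 'vendor/autoload.php';

$client = new \Goutte\Client();

// Build the guessed search URL directly instead of submitting the form.
$query   = urlencode('web scraping');
$crawler = $client->request('GET', "https://en.wikipedia.org/w/index.php?search={$query}");

echo $crawler->filter('.mw-parser-output p')->first()->text();
```

It’s shorter, but it breaks the moment the site changes its URL scheme; the form-based version survives anything short of a redesign of the form itself.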
It’s also really fun that, as long as you don’t do stupid things, Panther’s and Goutte’s APIs are largely compatible. Because Goutte is basically an HTML-only browser, it can’t do cool things like take a screenshot or wait for DOM elements to load. But other than that, these two will work the same. And the screenshot feature? It is certainly pretty cool. 🤓
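To make those Panther-only features concrete, here’s a small sketch using the two calls Goutte can’t do: waitFor() blocks until a selector appears in the live DOM, and takeScreenshot() saves a PNG of what Chrome is showing. Both are real Panther methods; the selector and filename are just my choices:

```php
<?php
require 'vendor/autoload.php';

$client  = \Symfony\Component\Panther\Client::createChromeClient();
$crawler = $client->request('GET', 'https://en.wikipedia.org/wiki/Web_scraping');

// Wait until the article heading is actually in the DOM
// (this is what handles JavaScript-rendered content).
$client->waitFor('.firstHeading');

// Save a PNG of the rendered page -- handy for debugging a broken scraper.
$client->takeScreenshot('web-scraping-article.png');

echo $crawler->filter('.firstHeading')->text();
```

When a scraper misbehaves, that screenshot is often the fastest way to see what the browser actually saw.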
Laravel Fan? Laravel Dusk Looks a Lot Like Panther, with Slightly Nicer Interactions
As I mentioned, I’ve not done a comprehensive review of PHP web scraping frameworks. But it did occur to me, as I wrote my minor complaint above about how awkward navigation can be with Symfony Panther, that there was something called Laravel Dusk that I’d not really studied.
Just like Symfony Panther, Laravel Dusk is meant primarily as a tool for you to test your own web application, not to scrape web sites you don’t own. But just the same, it also supports the idea of doing whatever you want with a PHP scraping tool.
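For flavor, here’s roughly what the earlier Wikipedia search looks like in Dusk’s fluent style. This is a sketch from memory of Dusk’s browser API (visit, type, press, and waitFor are real Dusk methods), written as it would appear inside a Dusk test case rather than as a standalone script:

```php
<?php
// Inside a Laravel Dusk test case (tests/Browser/...):
use Laravel\Dusk\Browser;

public function testWikipediaSearch(): void
{
    $this->browse(function (Browser $browser) {
        $browser->visit('https://www.wikipedia.org/')
                ->type('search', 'web scraping') // fill the search input by name
                ->press('Search')                // click the submit button
                ->waitFor('.firstHeading')       // wait for the result page
                ->assertSee('Web scraping');
    });
}
```

Those chained calls are the “slightly nicer interactions” of the heading above: you describe the user’s moves instead of wrangling form objects.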
There’s a Lot More To Do with PHP Web Scrapers
Web scraping with PHP is really limited only by your imagination. And by complex human tests like reCAPTCHAs. But other than that, I like where this web scraping tutorial has ended up. PHP is a powerful language, and understanding how you can use it to harvest data from the web at large is well worth the effort.
Go forth and act responsibly, gathering data that the owner is hopefully OK with your web-scraping.