This article collects information on how to use proxies with Selenium.

1. Selenium integration with Proxies

We provide Python integration guides on how to use Selenium with our proxies. We do not maintain documentation for other languages, so please search online for language-specific guides.

The Python docs become available in the package details after you create a package with us.

2. Selenium can be detected by websites.

There are various ways in which target websites can detect Selenium and consequently block your scraping activity.

Read the following Stack Overflow thread to understand how websites try to detect the presence of Selenium/WebDriver.

https://stackoverflow.com/questions/33225947/can-a-website-detect-when-you-are-using-selenium-with-chromedriver

Because of this, we recommend using the "hardened Selenium" provided by Multilogin in their "Automation S" package, which currently costs $200/month.

Here is the link to it: https://multilogin.com/pricing-purchase/

This appears to help avoid Selenium detection, but websites may introduce additional countermeasures over time.


3. Use IP address authenticated proxies

It is best to use Selenium with "IP Auth" proxies rather than Login:Password authentication, because implementing the login/password auth method in Selenium is quite complex. You can easily whitelist your IP address for IP authentication inside our proxy packages.
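With an IP-authenticated proxy, no credentials are embedded in the proxy address, so pointing Chrome at it reduces to a single command-line flag. A minimal sketch follows; the host `proxy.example.com` and port `8080` are placeholders for the address shown in your package details:

```python
def proxy_server_flag(host: str, port: int) -> str:
    """Return the Chrome command-line flag that routes traffic through
    a proxy. With IP-authenticated proxies no username/password is
    embedded in the address, which is why a single flag is enough."""
    return f"--proxy-server={host}:{port}"

# Usage with Selenium (requires selenium and a matching chromedriver):
# from selenium import webdriver
# from selenium.webdriver.chrome.options import Options
#
# options = Options()
# options.add_argument(proxy_server_flag("proxy.example.com", 8080))
# driver = webdriver.Chrome(options=options)
# driver.get("https://httpbin.org/ip")  # should report the proxy's IP
# driver.quit()
```

Make sure the machine running Selenium is the one whose IP you whitelisted; otherwise the proxy will refuse the connection.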

4. Always randomize User Agent and Browser Profiles.

Given the detection methods described above, it is best to randomize both User Agents AND Browser Profiles as you scrape. This can be done with Multilogin alongside Selenium, or with other Selenium plugins for at least User-Agent randomization.
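User-Agent randomization on its own can be sketched without any plugin by picking a string from a pool and passing it to Chrome as a flag. The pool below is a small hypothetical example; in practice you would maintain a larger, regularly updated list:

```python
import random

# Hypothetical pool of desktop User-Agent strings (keep yours up to date).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36",
]

def random_user_agent_flag() -> str:
    """Pick a random User-Agent from the pool and return it as a
    Chrome command-line flag."""
    return f"--user-agent={random.choice(USER_AGENTS)}"

# Usage with Selenium:
# from selenium.webdriver.chrome.options import Options
# options = Options()
# options.add_argument(random_user_agent_flag())
```

Note that a mismatched User-Agent (for example, a Windows string with Linux platform hints) can itself be a detection signal, which is why full browser-profile randomization via a tool like Multilogin is more robust.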
