Preface
There are two ways to log in to a site:
- Enter your login and password in the required fields and click the “Login” button.
- Set the cookies you need yourself.
The first way, simple as it is, will not suit us, because most sites limit the number of authorizations per day.
That leaves the second way, which works as follows. When you log in to the site, a special key is stored in a cookie. After that, whenever you open a page, you do not need to log in again: the site knows you are already logged in, because it has read your key from the cookie.
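For reference, Selenium represents each cookie as a plain dictionary. The names and values in this sketch are invented, but the shape matches what `driver.get_cookies()` returns and what `driver.add_cookie()` accepts:

```python
# A Selenium cookie is a plain dict. The session key the site sets after
# login typically looks something like this (all values here are invented):
sample_cookie = {
    "name": "session_id",   # hypothetical cookie name
    "value": "a1b2c3d4e5",  # the "special key" the site reads back
    "domain": "scrap.tf",
    "path": "/",
    "secure": True,
}

# driver.get_cookies() returns a list of such dicts,
# and driver.add_cookie() accepts one dict at a time.
print(sample_cookie["name"])
```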
Our task is to save the cookies after authorization, and then load them when we visit the site.
To save and load the cookies, we will use the pickle module. It ships with Python's standard library, so nothing needs to be installed.
Purpose.
In this tutorial, we will configure authorization on the scrap.tf website.
Let’s store cookies
Create a Python script and put the following code in it.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
import pickle

def get_webdriver():
    option = webdriver.FirefoxOptions()
    option.set_preference('dom.webdriver.enabled', False)
    driver = webdriver.Firefox(options=option)
    return driver

driver = get_webdriver()
driver.get('https://scrap.tf/')
driver.find_element(By.CSS_SELECTOR, 'img.sits-login').click()
wait = WebDriverWait(driver, 9999999999)
wait.until(ec.title_contains('Scrap.TF'))
with open('scrap_tf.pkl', 'wb') as f:
    pickle.dump(driver.get_cookies(), f)
driver.quit()
Let’s go through it in order.
import pickle
The pickle module will be responsible for saving and loading the cookies.
driver.get('https://scrap.tf/')
Go to the scrap.tf website.
Cookies are stored for the URL from which you logged in. For example, if you logged in at https://scrap.tf/raffles, then you need to load the cookies from that page.
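One consequence: `add_cookie` only succeeds if the browser is currently on a page whose domain matches the cookie. A small helper (hypothetical, not part of the scripts in this tutorial) can check that before loading:

```python
from urllib.parse import urlparse

def cookie_matches_url(cookie: dict, url: str) -> bool:
    """Rough check: does the cookie's domain cover this URL's host?"""
    host = urlparse(url).hostname or ""
    domain = cookie.get("domain", "").lstrip(".")
    return host == domain or host.endswith("." + domain)

# Invented cookie, shaped like a Selenium cookie dict
cookie = {"name": "session_id", "value": "xyz", "domain": "scrap.tf"}
print(cookie_matches_url(cookie, "https://scrap.tf/raffles"))  # True
print(cookie_matches_url(cookie, "https://example.com/"))      # False
```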
driver.find_element(By.CSS_SELECTOR, 'img.sits-login').click()
Click on the authorization button.
wait = WebDriverWait(driver, 9999999999)
wait.until(ec.title_contains('Scrap.TF'))
Then we wait, for up to 9999999999 seconds (effectively indefinitely), until the page title contains the substring “Scrap.TF” (ec.title_contains) — that is, until the authorization has completed.
with open('scrap_tf.pkl', 'wb') as f:
pickle.dump(driver.get_cookies(), f)
Here we open the file scrap_tf.pkl in binary write mode (“wb”); the open file object is stored in the variable f.
The pickle.dump call writes the browser’s cookies, driver.get_cookies(), to the file f.
Since this is a with block, the file does not need to be closed explicitly; it is closed automatically when the block is exited.
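The same dump/load pair can be tried in isolation. This sketch uses an invented cookie list and a temporary file instead of real browser cookies:

```python
import os
import pickle
import tempfile

# Invented cookie data, shaped like the output of driver.get_cookies()
cookies = [{"name": "session_id", "value": "a1b2c3", "domain": "scrap.tf"}]

path = os.path.join(tempfile.gettempdir(), "demo_cookies.pkl")

# Save, exactly as in the script above
with open(path, "wb") as f:
    pickle.dump(cookies, f)

# Load them back; the round trip returns an equal list
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == cookies)  # True
os.remove(path)
```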
driver.quit()
At the end, close the browser.
As a result, after authorization you will have a scrap_tf.pkl file in the same folder as the script.
Let’s load the cookies
Create a second Python script:
from selenium import webdriver
import pickle

def get_webdriver():
    option = webdriver.FirefoxOptions()
    option.set_preference('dom.webdriver.enabled', False)
    driver = webdriver.Firefox(options=option)
    return driver

driver = get_webdriver()
driver.get('https://scrap.tf')
with open('scrap_tf.pkl', 'rb') as f:
    cookies = pickle.load(f)
for cookie in cookies:
    driver.add_cookie(cookie)
# Reload the page so the site sees the freshly added cookies
driver.refresh()
In the previous script we saved the cookies; in this one we load them.
driver.get('https://scrap.tf')
We go to https://scrap.tf, because that is the URL for which the cookies were stored.
with open('scrap_tf.pkl', 'rb') as f:
    cookies = pickle.load(f)
Using pickle.load, we read the list of cookies back from the file scrap_tf.pkl.
for cookie in cookies:
driver.add_cookie(cookie)
And add each of these cookies to the browser. Since the page was loaded before the cookies were added, refresh it afterwards (driver.refresh()) so the site sees them and you appear logged in.
Now you are authorized, so you can automatically buy hats, participate in giveaways, get money from scrap.tf, and so on.