Having Trouble Clicking in Date Field with Selenium
I'm trying to scrape a table from the 1/30/2022 slate. However, I get the 'unable to locate element' error when I attempt to click in the date field and change the date from 2/6 to 1/30. I've tried finding by class name as well. Is there another way to do this, or is there something I'm doing wrong?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as ec
import time
path = r'C:\Program Files (x86)\chromedriver.exe'
driver = webdriver.Chrome(path)
driver.get('https://rotogrinders.com/resultsdb/nfl')
time.sleep(5)
driver.maximize_window()
time.sleep(10)
search = driver.find_element_by_xpath('//*[@id="navbar-demo1-mobile"]/div[1]/div/span/div')
search.click()
previous = driver.find_element_by_class_name('react-datepicker__navigation react-datepicker__navigation--previous')
previous.click()
time.sleep(5)
date = driver.find_element_by_class_name('react-datepicker__day react-datepicker__day--030 react-datepicker__day--weekend')
date.click()
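One likely cause of the failure: `find_element_by_class_name` accepts only a single class name, so a compound value like `react-datepicker__day react-datepicker__day--weekend` never matches. A workaround is to convert the space-separated class list into a CSS selector. A minimal sketch (the helper name is mine, not from the original post):

```python
def classes_to_css(class_attr: str) -> str:
    """Turn a space-separated class attribute into a CSS selector
    matching elements that carry all of those classes."""
    return '.' + '.'.join(class_attr.split())

selector = classes_to_css('react-datepicker__day react-datepicker__day--030 '
                          'react-datepicker__day--weekend')
# With a live driver, this replaces the failing find_element_by_class_name call:
# driver.find_element(By.CSS_SELECTOR, selector).click()
```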
Solution 1:[1]
wait=WebDriverWait(driver,60)
driver.get('https://rotogrinders.com/resultsdb/nfl')
wait.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe")))
date = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'.react-datepicker__input-container input')))
date.send_keys("01/16/2022")
First wait for the iframe to be available and switch into it, then wait for the date input to be clickable before sending keys.
Import:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
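Solution 1 sends the date as an `MM/DD/YYYY` string. If you are driving this from a `datetime` rather than a hard-coded literal, the matching format directive is `'%m/%d/%Y'` (the picker's expected format is an assumption here, inferred from the `"01/16/2022"` literal above):

```python
import datetime

# Build the slate date the question asks about (1/30/2022)
target = datetime.date(2022, 1, 30)
# '%m/%d/%Y' gives zero-padded month/day, matching the "01/16/2022" shape
date_str = target.strftime('%m/%d/%Y')
```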
Solution 2:[2]
It might be possible to avoid Selenium here. It's just a matter of pulling out some id's to feed into the direct url.
import requests
import datetime
import pandas as pd
dateStr = input('Enter date (YYYY-MM-DD): ')
dateStr_alpha = datetime.datetime.strptime(dateStr, '%Y-%m-%d').strftime('%Y%m%d')
url = f'https://service.fantasylabs.com/contest-sources/?sport_id=1&date={dateStr}'
jsonData = requests.get(url).json()
groupId = jsonData['contest-sources'][0]['draft_groups'][0]['id']
url = f'https://service.fantasylabs.com/live-contests/?sport=NFL&contest_group_id={groupId}'
jsonData = requests.get(url).json()
tables = {}
for each in jsonData['live_contests']:
    contestId = each['contest_id']
    if each['contest_name'] not in tables.keys():
        tables[each['contest_name']] = {}
    url = f'https://dh5nxc6yx3kwy.cloudfront.net/contests/nfl/{dateStr_alpha}/{contestId}/data/'
    jsonData = requests.get(url).json()
    contestUsers = pd.DataFrame(jsonData['users']).T.reset_index(drop=True)
    tables[each['contest_name']]['users'] = contestUsers
    fieldExposures = pd.DataFrame(jsonData['players']).T
    for k, v in jsonData['exposures'].items():
        exposureDf = pd.DataFrame(v['exposureCounts']).T
        exposureDf.columns = [x + f'_top_{k}%' for x in exposureDf.columns]
        fieldExposures = pd.merge(fieldExposures, exposureDf, how='left', left_index=True, right_index=True)
    fieldExposures = fieldExposures.fillna(0).reset_index(drop=True)
    tables[each['contest_name']]['exposures'] = fieldExposures
    print('****** ' + each['contest_name'] + ' ******')
    print(contestUsers, fieldExposures)
Now just call the table by its contest name:
print(tables['NFL $100K Conference Special [$20K to 1st]'])
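One pitfall worth flagging in the date handling above: in `strptime`/`strftime` format strings, `%m` is the zero-padded month while `%M` is minutes. Mixing them up (e.g. `'%Y-%M-%d'`) does not raise an error; it silently parses the month digits as minutes and leaves the month at its default. A quick stdlib check:

```python
import datetime

# Correct: '%m' parses '01' as the month
right = datetime.datetime.strptime('2022-01-30', '%Y-%m-%d')
compact = right.strftime('%Y%m%d')  # the form the cloudfront URL expects

# Wrong: '%M' parses '01' as minutes; the month silently defaults to January
wrong = datetime.datetime.strptime('2022-03-30', '%Y-%M-%d')
```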
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Arundeep Chohan |
| Solution 2 | |

