Python Selenium: How can I deal with results like [<selenium.webdriver.remote.webelement.WebElement (session.....)>]

I'm sorry for my poor English, but it would be very helpful if someone knows a good solution for this.

The code runs without errors, but the result looks like this: [<selenium.webdriver.remote.webelement.WebElement (session="78acd33389c3e5a8d28ce772e71ccece", element="8532ae3f-c3d9-4daa-8669-f3fd23c892f9")>]

How can I get the text instead? For example, the text I want is [2002/04 に設立] (founded 2002/04), as shown in the attached picture.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException
import time
import pandas as pd
from time import sleep

options = Options()
browser = webdriver.Chrome('path',options=options)

pageURL = 'https://www.wantedly.com/projects?type=popular&page=2&occupation_types%5B%5D=jp__engineering&hiring_types%5B%5D=mid_career&hiring_types%5B%5D=newgrad&hiring_types%5B%5D=internship&hiring_types%5B%5D=side_job&hiring_types%5B%5D=freelance'
browser.get(pageURL)
sleep(3)

elem_urls=[]

i = 0

while i < 1:  # https://note.nkmk.me/python-while-usage/
    elems = browser.find_elements_by_css_selector(".project-bottom .company-name h3 a")

    for elem in elems:
        elem_urls.append(elem.get_attribute("href"))

    # Click "Next" (次へ) to move to the next page
    try:
        next_button = browser.find_element_by_class_name("icon-angle-right")
        next_button.click()
        sleep(3)
    except Exception:
        break
    i += 1

print('ページ数:', len(elem_urls))  # 'ページ数' means 'number of pages'

cols = ['住所']  # '住所' means 'address'
df = pd.DataFrame(index=[], columns=cols)
 
for i in elem_urls:
    browser.get(i)
    
    #comp_tit = browser.find_element_by_css_selector(".BasicInfoSection__LeftCol-sc-kk2ai9-1 .BasicInfoSection__ListItem-sc-kk2ai9-9 .BasicInfoSection__CompanyInfoDescription-sc-kk2ai9-12")
    #comp_tit = browser.find_element_by_class_name("BasicInfoSection__CompanyInfoDescription-sc-kk2ai9-12")
    comp_tit = browser.find_element_by_xpath("//*[@id='basic_info']/div/div/div[1]/li[1]/div")
    
    df = df.append({'住所':comp_tit}, ignore_index=True)
    df.head(10)

df.to_csv("output_pd.csv", encoding="UTF-8")



Solution 1:[1]

The .find_element method returns a web element, so

comp_tit = browser.find_element_by_xpath("//*[@id='basic_info']/div/div/div[1]/li[1]/div")

will give you a web element object, not its text.
If you want that element's text value, you can read it from the element's .text attribute.
So your code should be:

comp_tit = browser.find_element_by_xpath("//*[@id='basic_info']/div/div/div[1]/li[1]/div").text

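For reference, a minimal sketch of how that fix drops into the loop from your question (everything else, the URL list, driver setup, and DataFrame handling, is assumed to stay exactly as you already have it):

for i in elem_urls:
    browser.get(i)

    # .text returns the element's visible text instead of the WebElement object itself
    comp_tit = browser.find_element_by_xpath("//*[@id='basic_info']/div/div/div[1]/li[1]/div").text

    df = df.append({'住所': comp_tit}, ignore_index=True)

If you are on Selenium 4, where the find_element_by_* helpers are deprecated, the equivalent call would be:

from selenium.webdriver.common.by import By

comp_tit = browser.find_element(By.XPATH, "//*[@id='basic_info']/div/div/div[1]/li[1]/div").text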

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Prophet