Scaling traffic density in SUMO
I'm using SUMO to simulate traffic in a large-scale environment. I would like to simulate different traffic densities, and I noticed that the `--scale` option allows scaling the amount of traffic. However, it is still unclear to me what this value means and how it affects the simulation. What are the bounds of this value? How precisely is the traffic scaled? Is there a precise explanation of how this works?
thanks :)
Solution 1:[1]
The `--scale` option really does scale the whole demand by the given factor. So if the factor is 2, then whenever a vehicle would be inserted by the input definitions, two vehicles are inserted instead. Values < 1 are interpreted as a probability, so `--scale 0.5` means a 50% chance that each vehicle will be skipped. In theory there is no upper limit to the value you give here, but if the street is full, further vehicles cannot be inserted immediately and pile up in the insertion buffer instead. This means they get inserted later, once there is sufficient space.
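The behaviour described above can be sketched in a few lines of Python (a toy model of the stated semantics, not SUMO's actual implementation; `scaled_copies` is a hypothetical helper name):

```python
import math
import random

def scaled_copies(scale: float) -> int:
    """How many copies of one demand vehicle get inserted under --scale.

    Illustrative sketch only: the integer part gives guaranteed copies,
    and the fractional part is treated as the probability of one extra
    copy (which for scale < 1 reduces to "insert with probability scale").
    """
    whole = math.floor(scale)
    extra = 1 if random.random() < scale - whole else 0
    return whole + extra

# --scale 2:   every demand vehicle is inserted twice
# --scale 0.5: each vehicle has a 50% chance of being inserted at all
```

Note that this sketch ignores the insertion buffer: in SUMO itself, copies that cannot be placed on a full street are queued and inserted later.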
Solution 2:[2]
This was an interesting one; see the script below, which downloads all the raw data. I've pulled out the key pieces of data as requested, but you may want to look at the raw data yourself, as there is a lot there.
One thing to note is that the only 'PGA_CM/S^2' data I could find is the value in bold on the "Go" page for each record.
```python
import requests
import pandas as pd

url = 'https://esm-db.eu/esm_next_ws/jsonrpc'
payload = '{"jsonrpc":"2.0","method":"armonia","id":"8","params":{"map":{"_page":"DYNA_X_event_waveform_band_instrument_D","_state":"find","_action_json_rpc_list":"1","_rows_per_page":"10000","internal_event_id":"IT-2012-0008","_operator_internal_event_id":"=","_order_field_0":"epi_dist","_order_direction_0":"asc","_token":"NULLNULLNULLNULL"}}}'
headers = {
    'Accept': 'application/json, text/plain, */*',
    'Accept-Encoding': 'gzip, deflate, br',
    'Content-Length': str(len(payload)),
    'Content-Type': 'application/x-www-form-urlencoded',
    'Host': 'esm-db.eu',
    'Origin': 'https://esm-db.eu',
    'Referer': 'https://esm-db.eu/',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36'
}

print('Fetching data...')
data = requests.post(url, headers=headers, data=payload).json()

final = []
for row in data['result']['rows']:
    # Loads of data per row; it might be worth dumping the raw JSON
    # (the `data` variable) and seeing what else you want.
    item = {
        'Network': row[39],
        'Station_Code': row[22],
        'Station_Name': row[46][10],
        'Station_Latitude_Degree': row[46][12],
        'Station_Longitude_Degree': row[46][3],
        'PGA_CM/S^2': row[18],  # can only find the value in bold on the "Go" page
        'Date': row[24].replace('T', ' '),
    }
    final.append(item)

df = pd.DataFrame(final)
df.to_csv('earthquakedata.csv', index=False)
print('Saved to earthquakedata.csv')
```
If you want the whole load of data (it's almost unmanageable as CSV), you can dump it all to CSV by changing the last few lines to this:
```python
print('Fetching data...')
data = requests.post(url, headers=headers, data=payload).json()

df = pd.DataFrame(data['result']['rows'])
df.to_csv('earthquakedata_ugly.csv', index=False)
print('Saved to earthquakedata_ugly.csv')
```
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Michael |
| Solution 2 | bushcat69 |
