Python
Source
Book
Automate the Boring Stuff with Python
https://automatetheboringstuff.com/2e/
Python website
https://www.python.org/
App
Interactive shell
Also called
REPL
Read-Evaluate-Print Loop
Code
Aspect
Expression
values
1,2,3,4
operators
**
Exponent
2**3
8
%
Modulus / Remainder
22%8
6
22-16=6
//
Integer division/ Floored quotient
22//8
2
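Floor division and modulus fit together: for any integers a and b (b nonzero), a == b * (a // b) + (a % b). A quick sketch using the 22 and 8 from above:

```python
# Floor division and modulus are complementary:
# a == b * (a // b) + (a % b) for any nonzero b.
a, b = 22, 8
quotient = a // b   # 2
remainder = a % b   # 6
print(quotient, remainder)            # 2 6
print(b * quotient + remainder == a)  # True
```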
*, /, +, -
Math in Python
Precedence
The order in which operators are evaluated: ** first, then *, /, //, %, then + and -. Parentheses override the default order.
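A quick check of precedence in the interactive shell:

```python
# Operator precedence: ** binds tightest, then * and /, then + and -.
print(2 + 3 * 4)    # 14, not 20: * is evaluated before +
print((2 + 3) * 4)  # 20: parentheses are evaluated first
print(2 ** 3 * 2)   # 16: exponent binds tighter than *
```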
Data types
Integers
-2, -1, 0, 1, 2, 3, 4, 5
Floating-point numbers
-1.25, -1.0, -0.5, 0.0, 0.5, 1.0, 1.25
Strings
'a', 'aa', 'aaa', 'Hello!', '11 cats'
String Concatenation and Replication
-- >>> 'Alice' + 'Bob'
'AliceBob'
-- >>> 'Alice' + 42
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
'Alice' + 42
TypeError: can only concatenate str (not "int") to str
-- >>> 'Alice' * 5
'AliceAliceAliceAliceAlice'
-- >>> 'Alice' * 'Bob'
TypeError: can't multiply sequence by non-int of type 'str'
-- >>> 'Alice' * 5.0
TypeError: can't multiply sequence by non-int of type 'float'
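The concatenation TypeError goes away once the number is converted with str() (or the string with int()); a minimal sketch:

```python
# str() turns a number into a string, so + concatenates instead of failing.
print('Alice' + str(42))   # Alice42
# int() goes the other way for numeric strings.
print(int('42') + 1)       # 43
```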
Function
Variable
List
x = [1,2,3,4]
'>>>' print(x[1:3])
[2,3]
'>>>' x[1:3] = []
[1,4]
'>>>' x[0] = 99
[99,2,3,4]
'>>>' x = x+[7,8]
[1,2,3,4,7,8]
'>>>' print(len(x))
4
data = [[1,2,3],[4,5,6]]
'>>>' print(data[0][1])
2
'>>>' print(data[1][0:2])
[4,5]
'>>>' data[1][0:1] = [5,5,5]
[[1,2,3], [5,5,5,5,6]]
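Each '>>>' line above assumes a fresh list; collected into one runnable sketch:

```python
x = [1, 2, 3, 4]
print(x[1:3])        # [2, 3]: slice from index 1 up to, not including, 3

x = [1, 2, 3, 4]
x[1:3] = []          # assigning [] to a slice deletes those items
print(x)             # [1, 4]

x = [1, 2, 3, 4]
x[0] = 99            # item assignment replaces one element
print(x)             # [99, 2, 3, 4]

x = [1, 2, 3, 4]
x = x + [7, 8]       # concatenation builds a new list
print(x)             # [1, 2, 3, 4, 7, 8]

data = [[1, 2, 3], [4, 5, 6]]
data[1][0:1] = [5, 5, 5]   # slice assignment can grow the inner list
print(data)          # [[1, 2, 3], [5, 5, 5, 5, 6]]
```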
Dictionary
Set
s1 = {1,2,3,4}
s2 = {3,4,5,6}
'>>>' print(3 in s1)
True
'>>>' s3 = s1 & s2
'>>>' print(s3)
{3,4}
'>>>' s3 = s1 | s2
'>>>' print(s3)
{1,2,3,4,5,6}
'>>>' s3 = s1 - s2
'>>>' print(s3)
{1,2}
'>>>' s3 = s1 ^ s2
'>>>' print(s3)
{1,2,5,6}
'>>>' s = set('World')
{'o', 'd', 'r', 'W', 'l'}
dic = {'red':'apple', 'blue':'sky'}
'>>>' dic['red'] = 'Little apple'
'>>>' print(dic['red'])
Little apple
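A few more common dictionary operations, using the same dic as above (the 'green' lookup is made up for illustration):

```python
dic = {'red': 'apple', 'blue': 'sky'}
dic['red'] = 'Little apple'      # assignment replaces the value for a key
print(dic['red'])                # Little apple
print(dic.get('green', 'none'))  # none: get() avoids a KeyError for missing keys
print(list(dic.keys()))          # ['red', 'blue']: keys keep insertion order
```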
Tuple
x = (1,2,3,4)
'>>>' x[0] = 5
TypeError: 'tuple' object does not support item assignment
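Tuples are immutable, so item assignment fails; to change one element, convert to a list and back (a sketch):

```python
x = (1, 2, 3, 4)
# x[0] = 5 would raise TypeError: tuples do not support item assignment.
y = list(x)     # copy into a mutable list
y[0] = 5        # change the element there
x = tuple(y)    # rebuild a tuple
print(x)        # (5, 2, 3, 4)
```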
Bs4
Get the URL from an <a href> tag
hrefs=Book_url.find_all('a')
hrefs[1]['href']
bs4.BeautifulSoup(response.text, 'html.parser')
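find_all('a') and ['href'] work the same on any parsed HTML; a self-contained sketch with an inline page standing in for response.text (the HTML string and URLs here are made up for illustration):

```python
import bs4

# Hypothetical HTML in place of response.text from requests.
html = '<a href="https://example.com/a">A</a><a href="https://example.com/b">B</a>'
soup = bs4.BeautifulSoup(html, 'html.parser')
hrefs = soup.find_all('a')   # every <a> tag in the document
print(hrefs[1]['href'])      # https://example.com/b
```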
Analyze Url
Requests
response=requests.get(Url, headers={
'cookie':'cf_clearance=3a941f898a9a963c93a2ecb29e6e3b6bda6cc1cb-1586582915-0-150',
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'
})