Поиск по блогу

среда, 22 октября 2014 г.

Исправляем код "Random proxy middleware for Scrapy" и открываем способ поиска в форках

Здесь в качестве упражнения разбираем хрестоматийный пример Random proxy middleware for Scrapy и находим ошибки в устаревшем коде. Результат - рабочий код... и и десяток ссылок для его рефаеторинга. Сложный поиск на GitHub не нашел в форках ... того, что удалось найти по запросу process_exception(self, request, exception, spider) proxy

Список прокси из файла загружается в словарь, из которого случайным образом выбирается конкретный прокси, который передается в request.meta['proxy'] каждого запроса. Если запрос после 10 повторений не получает ответа, то адрес прокси-сервера удаляется из словаря. В случае, если if 'proxy' in request.meta уже заполнено (прокси задан системно или в спайдере\краулере), то он не переприсваивается, таким образом, можно комбинировать middleware (и в DOWNLOADER_MIDDLEWARES)

Файл проверен на w8 (Windows)

In [83]:
#Вот результат работы
%load "C:\\Users\\kiss\\Documents\\GitHub\\dirbot_se1\\dirbot\\randomproxy.py"
In []:
import re
import random
import base64
from scrapy import log
# I had to add this for windows 
# to open proxy_list = "C:/Users/kiss/Documents/GitHub/dirbot_se1/dirbot/list.txt"
impotr os 
#import pdb

class RandomProxy(object):
    def __init__(self, settings):
        self.proxy_list = settings.get('PROXY_LIST')
        fin = open(self.proxy_list)

        self.proxies = {}
        for line in fin.readlines():
            parts = re.match('(\w+://)(\w+:\w+@)?(.+)', line)
            pdb.set_trace()
            # Cut trailing @
   if parts.group(2): # if there are usern:password@
    user_pass = parts.group(2)[:-1]
   else:
    user_pass = ''

   self.proxies[parts.group(1) + parts.group(3)] = user_pass

        fin.close()

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def process_request(self, request, spider):
        # Don't overwrite with a random one (server-side state for IP)
        if 'proxy' in request.meta:
            return

        proxy_address = random.choice(self.proxies.keys())
        proxy_user_pass = self.proxies[proxy_address]

        request.meta['proxy'] = proxy_address
        if proxy_user_pass:
            basic_auth = 'Basic ' + base64.encodestring(proxy_user_pass)
            request.headers['Proxy-Authorization'] = basic_auth

    def process_exception(self, request, exception, spider):
        proxy = request.meta['proxy']
        log.msg('Removing failed proxy <%s>, %d proxies left' % (
                    proxy, len(self.proxies)))
        try:
            self.proxies.pop(proxy)
        except ValueError:
            pass

Далее записан процесс изучения оригинала %load "C:\Users\kiss\Documents\GitHub\dirbot_se1\dirbot\randomproxy.py"

In [1]:
# Вот оригинал (то, что было)
%load "C:\\Users\\kiss\\Documents\\GitHub\\dirbot_se1\\dirbot\\randomproxy_bak.py"
In []:
import re
import random
import base64
from scrapy import log
# debugger
import pdb

class RandomProxy(object):
    def __init__(self, settings):
        self.proxy_list = settings.get('PROXY_LIST')
        fin = open(self.proxy_list)

        self.proxies = {}
        for line in fin.readlines():
            parts = re.match('(\w+://)(\w+:\w+@)?(.+)', line)
            pdb.set_trace() # This is after my previous attemt to debug this
            # Cut trailing @
            if parts[1]:
                parts[1] = parts[1][:-1]

            self.proxies[parts[0] + parts[2]] = parts[1]

        fin.close()

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def process_request(self, request, spider):
        # Don't overwrite with a random one (server-side state for IP)
        if 'proxy' in request.meta:
            return

        proxy_address = random.choice(self.proxies.keys())
        proxy_user_pass = self.proxies[proxy_address]

        request.meta['proxy'] = proxy_address
        if proxy_user_pass:
            basic_auth = 'Basic ' + base64.encodestring(proxy_user_pass)
            request.headers['Proxy-Authorization'] = basic_auth

    def process_exception(self, request, exception, spider):
        proxy = request.meta['proxy']
        log.msg('Removing failed proxy <%s>, %d proxies left' % (
                    proxy, len(self.proxies)))
        try:
            del self.proxies[proxy]
        except ValueError:
            pass

Конструктор класса открывает текстовый файл, считывает строки и производит над ними непонятные манипуляции

Открываем файл PROXY_LIST, считываем строки, заполняем словарь self.proxies... А что там и как выбирается, сейчас посмотрим

In [2]:
import re
In [20]:
str_1 ='http://127.0.0.1:8080'

Теперь попробуем регулярное выражение из класса

In [21]:
parts_e = re.match('(\w+://)(\w+:\w+@)?(.+)', str_1)

В кавычках просмтриваются три группы

In []:
\w       Matches any alphanumeric character; equivalent to [a-zA-Z0-9_]
"+"      Matches 1 or more (greedy) repetitions of the preceding RE

(\w+://) - just part of string ( http:// or ftp://)
    
"?"      Matches 0 or 1 (greedy) of the preceding RE.
        *?,+?,?? Non-greedy versions of the previous three special characters.
"."      Matches any character except a newline

(\w+:\w+@)? - username:passw@ or 0

(.+) - any string with length not less than 1 characters
In [11]:
parts_e.groups()
Out[11]:
('http://', None, '127.0.0.1:8080')
In [5]:
help(re)
Help on module re:

NAME
    re - Support for regular expressions (RE).

FILE
    c:\users\kiss\anaconda\lib\re.py

DESCRIPTION
    This module provides regular expression matching operations similar to
    those found in Perl.  It supports both 8-bit and Unicode strings; both
    the pattern and the strings being processed can contain null bytes and
    characters outside the US ASCII range.
    
    Regular expressions can contain both special and ordinary characters.
    Most ordinary characters, like "A", "a", or "0", are the simplest
    regular expressions; they simply match themselves.  You can
    concatenate ordinary characters, so last matches the string 'last'.
    
    The special characters are:
        "."      Matches any character except a newline.
        "^"      Matches the start of the string.
        "$"      Matches the end of the string or just before the newline at
                 the end of the string.
        "*"      Matches 0 or more (greedy) repetitions of the preceding RE.
                 Greedy means that it will match as many repetitions as possible.
        "+"      Matches 1 or more (greedy) repetitions of the preceding RE.
        "?"      Matches 0 or 1 (greedy) of the preceding RE.
        *?,+?,?? Non-greedy versions of the previous three special characters.
        {m,n}    Matches from m to n repetitions of the preceding RE.
        {m,n}?   Non-greedy version of the above.
        "\\"     Either escapes special characters or signals a special sequence.
        []       Indicates a set of characters.
                 A "^" as the first character indicates a complementing set.
        "|"      A|B, creates an RE that will match either A or B.
        (...)    Matches the RE inside the parentheses.
                 The contents can be retrieved or matched later in the string.
        (?iLmsux) Set the I, L, M, S, U, or X flag for the RE (see below).
        (?:...)  Non-grouping version of regular parentheses.
        (?P<name>...) The substring matched by the group is accessible by name.
        (?P=name)     Matches the text matched earlier by the group named name.
        (?#...)  A comment; ignored.
        (?=...)  Matches if ... matches next, but doesn't consume the string.
        (?!...)  Matches if ... doesn't match next.
        (?<=...) Matches if preceded by ... (must be fixed length).
        (?<!...) Matches if not preceded by ... (must be fixed length).
        (?(id/name)yes|no) Matches yes pattern if the group with id/name matched,
                           the (optional) no pattern otherwise.
    
    The special sequences consist of "\\" and a character from the list
    below.  If the ordinary character is not on the list, then the
    resulting RE will match the second character.
        \number  Matches the contents of the group of the same number.
        \A       Matches only at the start of the string.
        \Z       Matches only at the end of the string.
        \b       Matches the empty string, but only at the start or end of a word.
        \B       Matches the empty string, but not at the start or end of a word.
        \d       Matches any decimal digit; equivalent to the set [0-9].
        \D       Matches any non-digit character; equivalent to the set [^0-9].
        \s       Matches any whitespace character; equivalent to [ \t\n\r\f\v].
        \S       Matches any non-whitespace character; equiv. to [^ \t\n\r\f\v].
        \w       Matches any alphanumeric character; equivalent to [a-zA-Z0-9_].
                 With LOCALE, it will match the set [0-9_] plus characters defined
                 as letters for the current locale.
        \W       Matches the complement of \w.
        \\       Matches a literal backslash.
    
    This module exports the following functions:
        match    Match a regular expression pattern to the beginning of a string.
        search   Search a string for the presence of a pattern.
        sub      Substitute occurrences of a pattern found in a string.
        subn     Same as sub, but also return the number of substitutions made.
        split    Split a string by the occurrences of a pattern.
        findall  Find all occurrences of a pattern in a string.
        finditer Return an iterator yielding a match object for each match.
        compile  Compile a pattern into a RegexObject.
        purge    Clear the regular expression cache.
        escape   Backslash all non-alphanumerics in a string.
    
    Some of the functions in this module takes flags as optional parameters:
        I  IGNORECASE  Perform case-insensitive matching.
        L  LOCALE      Make \w, \W, \b, \B, dependent on the current locale.
        M  MULTILINE   "^" matches the beginning of lines (after a newline)
                       as well as the string.
                       "$" matches the end of lines (before a newline) as well
                       as the end of the string.
        S  DOTALL      "." matches any character at all, including the newline.
        X  VERBOSE     Ignore whitespace and comments for nicer looking RE's.
        U  UNICODE     Make \w, \W, \b, \B, dependent on the Unicode locale.
    
    This module also defines an exception 'error'.

CLASSES
    exceptions.Exception(exceptions.BaseException)
        sre_constants.error
    
    class error(exceptions.Exception)
     |  Method resolution order:
     |      error
     |      exceptions.Exception
     |      exceptions.BaseException
     |      __builtin__.object
     |  
     |  Data descriptors defined here:
     |  
     |  __weakref__
     |      list of weak references to the object (if defined)
     |  
     |  ----------------------------------------------------------------------
     |  Methods inherited from exceptions.Exception:
     |  
     |  __init__(...)
     |      x.__init__(...) initializes x; see help(type(x)) for signature
     |  
     |  ----------------------------------------------------------------------
     |  Data and other attributes inherited from exceptions.Exception:
     |  
     |  __new__ = <built-in method __new__ of type object>
     |      T.__new__(S, ...) -> a new object with type S, a subtype of T
     |  
     |  ----------------------------------------------------------------------
     |  Methods inherited from exceptions.BaseException:
     |  
     |  __delattr__(...)
     |      x.__delattr__('name') <==> del x.name
     |  
     |  __getattribute__(...)
     |      x.__getattribute__('name') <==> x.name
     |  
     |  __getitem__(...)
     |      x.__getitem__(y) <==> x[y]
     |  
     |  __getslice__(...)
     |      x.__getslice__(i, j) <==> x[i:j]
     |      
     |      Use of negative indices is not supported.
     |  
     |  __reduce__(...)
     |  
     |  __repr__(...)
     |      x.__repr__() <==> repr(x)
     |  
     |  __setattr__(...)
     |      x.__setattr__('name', value) <==> x.name = value
     |  
     |  __setstate__(...)
     |  
     |  __str__(...)
     |      x.__str__() <==> str(x)
     |  
     |  __unicode__(...)
     |  
     |  ----------------------------------------------------------------------
     |  Data descriptors inherited from exceptions.BaseException:
     |  
     |  __dict__
     |  
     |  args
     |  
     |  message

FUNCTIONS
    compile(pattern, flags=0)
        Compile a regular expression pattern, returning a pattern object.
    
    escape(pattern)
        Escape all non-alphanumeric characters in pattern.
    
    findall(pattern, string, flags=0)
        Return a list of all non-overlapping matches in the string.
        
        If one or more groups are present in the pattern, return a
        list of groups; this will be a list of tuples if the pattern
        has more than one group.
        
        Empty matches are included in the result.
    
    finditer(pattern, string, flags=0)
        Return an iterator over all non-overlapping matches in the
        string.  For each match, the iterator returns a match object.
        
        Empty matches are included in the result.
    
    match(pattern, string, flags=0)
        Try to apply the pattern at the start of the string, returning
        a match object, or None if no match was found.
    
    purge()
        Clear the regular expression cache
    
    search(pattern, string, flags=0)
        Scan through string looking for a match to the pattern, returning
        a match object, or None if no match was found.
    
    split(pattern, string, maxsplit=0, flags=0)
        Split the source string by the occurrences of the pattern,
        returning a list containing the resulting substrings.
    
    sub(pattern, repl, string, count=0, flags=0)
        Return the string obtained by replacing the leftmost
        non-overlapping occurrences of the pattern in string by the
        replacement repl.  repl can be either a string or a callable;
        if a string, backslash escapes in it are processed.  If it is
        a callable, it's passed the match object and must return
        a replacement string to be used.
    
    subn(pattern, repl, string, count=0, flags=0)
        Return a 2-tuple containing (new_string, number).
        new_string is the string obtained by replacing the leftmost
        non-overlapping occurrences of the pattern in the source
        string by the replacement repl.  number is the number of
        substitutions that were made. repl can be either a string or a
        callable; if a string, backslash escapes in it are processed.
        If it is a callable, it's passed the match object and must
        return a replacement string to be used.
    
    template(pattern, flags=0)
        Compile a template pattern, returning a pattern object

DATA
    DOTALL = 16
    I = 2
    IGNORECASE = 2
    L = 4
    LOCALE = 4
    M = 8
    MULTILINE = 8
    S = 16
    U = 32
    UNICODE = 32
    VERBOSE = 64
    X = 64
    __all__ = ['match', 'search', 'sub', 'subn', 'split', 'findall', 'comp...
    __version__ = '2.2.1'

VERSION
    2.2.1



В первоисточнике устаревший код, он не рабтает..., но чтобы заменить parts_e[2] на parts_e.group(2) надо понять, что собственно, этот код делал

In [12]:
parts_e[0],parts_e[1],parts_e[2]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-2b9b6ed11f84> in <module>()
----> 1 parts_e[0],parts_e[1],parts_e[2]

TypeError: '_sre.SRE_Match' object has no attribute '__getitem__'
In [17]:
parts_e.group(0), parts_e.group(1), parts_e.group(2),  parts_e.group(3)
Out[17]:
('http://127.0.0.1:8080', 'http://', None, '127.0.0.1:8080')

Вот особеености свойства group(1) - под номером (0) - исходная строка, таким образом нумерация частей начинается с (1) Вспомним про особенночти в старых индексах [] ... Не помню, но полагаю, что там все начинается с [0] Тогда, если предположить соответсвтвие [0] - group(1), [1]- group(2), [2]-group(3)

In []:
if parts[1]:
        parts[1] = parts[1][:-1]

self.proxies[parts[0] + parts[2]] = parts[1]
In [18]:
parts_e.group(1,3)
Out[18]:
('http://', '127.0.0.1:8080')
In [23]:
parts_e.group(1), parts_e.group(1)[:-1]
Out[23]:
('http://', 'http:/')
In [24]:
parts_e.group(1) + parts_e.group(3)
Out[24]:
'http://127.0.0.1:8080'
In []:
# Cut trailing @
if parts.group(2): # если есть авторизация на прокси сервере
    user_pass = parts.group(2)[:-1]
else:
    user_pass = ''

self.proxies[parts.group(1) + parts.group(3)] = user_pass
In [38]:
%load C:/Users/kiss/Documents/GitHub/dirbot_se1/dirbot/list.txt
In []:
http://94.180.118.34:8080
http://213.141.146.146:8080
http://218.108.232.93:80
http://54.85.145.16:3128

Далее я пытался сообразить, почему не открывается файл

Соображал долго (читал справку..., потом искал в инете...), наконец дошло, что (когда набирал запрос windows), что надо попробовать импортировать OS. Действительно, Питону все эти абсолютные пути не шибко понятны...

In [44]:
import os
In [57]:
proxy_list = "C:/Users/kiss/Documents/GitHub/dirbot_se1/dirbot/list.txt"
fin = open(proxy_list)

proxies = {}
In [43]:
fin.name
Out[43]:
'C:/Users/kiss/Documents/GitHub/dirbot_se1/dirbot/list.txt'
In [47]:
fin.readlines()
Out[47]:
['http://94.180.118.34:8080\n',
 'http://213.141.146.146:8080\n',
 'http://218.108.232.93:80\n',
 'http://54.85.145.16:3128']
In [39]:
help(open)
Help on built-in function open in module __builtin__:

open(...)
    open(name[, mode[, buffering]]) -> file object
    
    Open a file using the file() type, returns a file object.  This is the
    preferred way to open a file.  See file.__doc__ for further information.


In [40]:
file.__doc__
Out[40]:
"file(name[, mode[, buffering]]) -> file object\n\nOpen a file.  The mode can be 'r', 'w' or 'a' for reading (default),\nwriting or appending.  The file will be created if it doesn't exist\nwhen opened for writing or appending; it will be truncated when\nopened for writing.  Add a 'b' to the mode for binary files.\nAdd a '+' to the mode to allow simultaneous reading and writing.\nIf the buffering argument is given, 0 means unbuffered, 1 means line\nbuffered, and larger numbers specify the buffer size.  The preferred way\nto open a file is with the builtin open() function.\nAdd a 'U' to mode to open the file for input with universal newline\nsupport.  Any line ending in the input file will be seen as a '\\n'\nin Python.  Also, a file so opened gains the attribute 'newlines';\nthe value for this attribute is one of None (no newline read yet),\n'\\r', '\\n', '\\r\\n' or a tuple containing all the newline types seen.\n\n'U' cannot be combined with 'w' or '+' mode.\n"
In [59]:
for line in fin.readlines():
            print line
            parts = re.match('(\w+://)(\w+:\w+@)?(.+)', line)
            #pdb.set_trace() # This is after my previous attemt to debug this
            # Cut trailing @
            if parts.group(2): # если есть авторизация на прокси сервере
                user_pass = parts.group(2)[:-1]
            else:
                user_pass = ''

            proxies[parts.group(1) + parts.group(3)] = user_pass
http://94.180.118.34:8080

http://213.141.146.146:8080

http://218.108.232.93:80

http://54.85.145.16:3128

In [63]:
fin.close()
In [62]:
parts.group(1),parts.group(2),parts.group(3)
Out[62]:
('http://', None, '54.85.145.16:3128')

Вот отсюда я отправился разиратся, почему не читается файл Не должен быть словарь пустым....

In [34]:
.keys()
Out[34]:
{}
In [58]:
#А теперь я перезапустил код и вижу, что по умолчанию режим чтения....
fin.mode
Out[58]:
'r'
In [60]:
proxies
Out[60]:
{'http://213.141.146.146:8080': '',
 'http://218.108.232.93:80': '',
 'http://54.85.145.16:3128': '',
 'http://94.180.118.34:8080': ''}

Почему понадобился такой странный словарь? Строка : пароль ... Где юзер??? Пока не будем на это обращат внимание, просто порадуемся, что восстановили словарь...

In [64]:
help(proxies)
Help on dict object:

class dict(object)
 |  dict() -> new empty dictionary
 |  dict(mapping) -> new dictionary initialized from a mapping object's
 |      (key, value) pairs
 |  dict(iterable) -> new dictionary initialized as if via:
 |      d = {}
 |      for k, v in iterable:
 |          d[k] = v
 |  dict(**kwargs) -> new dictionary initialized with the name=value pairs
 |      in the keyword argument list.  For example:  dict(one=1, two=2)
 |  
 |  Methods defined here:
 |  
 |  __cmp__(...)
 |      x.__cmp__(y) <==> cmp(x,y)
 |  
 |  __contains__(...)
 |      D.__contains__(k) -> True if D has a key k, else False
 |  
 |  __delitem__(...)
 |      x.__delitem__(y) <==> del x[y]
 |  
 |  __eq__(...)
 |      x.__eq__(y) <==> x==y
 |  
 |  __ge__(...)
 |      x.__ge__(y) <==> x>=y
 |  
 |  __getattribute__(...)
 |      x.__getattribute__('name') <==> x.name
 |  
 |  __getitem__(...)
 |      x.__getitem__(y) <==> x[y]
 |  
 |  __gt__(...)
 |      x.__gt__(y) <==> x>y
 |  
 |  __init__(...)
 |      x.__init__(...) initializes x; see help(type(x)) for signature
 |  
 |  __iter__(...)
 |      x.__iter__() <==> iter(x)
 |  
 |  __le__(...)
 |      x.__le__(y) <==> x<=y
 |  
 |  __len__(...)
 |      x.__len__() <==> len(x)
 |  
 |  __lt__(...)
 |      x.__lt__(y) <==> x<y
 |  
 |  __ne__(...)
 |      x.__ne__(y) <==> x!=y
 |  
 |  __repr__(...)
 |      x.__repr__() <==> repr(x)
 |  
 |  __setitem__(...)
 |      x.__setitem__(i, y) <==> x[i]=y
 |  
 |  __sizeof__(...)
 |      D.__sizeof__() -> size of D in memory, in bytes
 |  
 |  clear(...)
 |      D.clear() -> None.  Remove all items from D.
 |  
 |  copy(...)
 |      D.copy() -> a shallow copy of D
 |  
 |  fromkeys(...)
 |      dict.fromkeys(S[,v]) -> New dict with keys from S and values equal to v.
 |      v defaults to None.
 |  
 |  get(...)
 |      D.get(k[,d]) -> D[k] if k in D, else d.  d defaults to None.
 |  
 |  has_key(...)
 |      D.has_key(k) -> True if D has a key k, else False
 |  
 |  items(...)
 |      D.items() -> list of D's (key, value) pairs, as 2-tuples
 |  
 |  iteritems(...)
 |      D.iteritems() -> an iterator over the (key, value) items of D
 |  
 |  iterkeys(...)
 |      D.iterkeys() -> an iterator over the keys of D
 |  
 |  itervalues(...)
 |      D.itervalues() -> an iterator over the values of D
 |  
 |  keys(...)
 |      D.keys() -> list of D's keys
 |  
 |  pop(...)
 |      D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
 |      If key is not found, d is returned if given, otherwise KeyError is raised
 |  
 |  popitem(...)
 |      D.popitem() -> (k, v), remove and return some (key, value) pair as a
 |      2-tuple; but raise KeyError if D is empty.
 |  
 |  setdefault(...)
 |      D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D
 |  
 |  update(...)
 |      D.update([E, ]**F) -> None.  Update D from dict/iterable E and F.
 |      If E present and has a .keys() method, does:     for k in E: D[k] = E[k]
 |      If E present and lacks .keys() method, does:     for (k, v) in E: D[k] = v
 |      In either case, this is followed by: for k in F: D[k] = F[k]
 |  
 |  values(...)
 |      D.values() -> list of D's values
 |  
 |  viewitems(...)
 |      D.viewitems() -> a set-like object providing a view on D's items
 |  
 |  viewkeys(...)
 |      D.viewkeys() -> a set-like object providing a view on D's keys
 |  
 |  viewvalues(...)
 |      D.viewvalues() -> an object providing a view on D's values
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __hash__ = None
 |  
 |  __new__ = <built-in method __new__ of type object>
 |      T.__new__(S, ...) -> a new object with type S, a subtype of T


In [66]:
proxies.keys()
Out[66]:
['http://213.141.146.146:8080',
 'http://94.180.118.34:8080',
 'http://54.85.145.16:3128',
 'http://218.108.232.93:80']
In [68]:
proxies.keys()[0]
Out[68]:
'http://213.141.146.146:8080'
In [69]:
del proxies[proxy]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-69-1ec6f1ff3ae9> in <module>()
----> 1 proxies[0]

KeyError: 0
In [71]:
proxies.items()[1]
Out[71]:
('http://94.180.118.34:8080', '')
In []:
Теперь попробуем стереть часть словаря, но код из randomproxy.py не работает:
In [72]:
del proxies.items()[1]

Как же заменить убрать запись в словаре?

Далее я сделал ошибку, надо было попробовать, как в первоисточнике, а я начал изобретать велосипед...

Я нашел .pop() и .popitem() методы, иллюстрация внизу:

In [73]:
proxies.items()
Out[73]:
[('http://213.141.146.146:8080', ''),
 ('http://94.180.118.34:8080', ''),
 ('http://54.85.145.16:3128', ''),
 ('http://218.108.232.93:80', '')]
In [74]:
proxies.pop('http://94.180.118.34:8080')
Out[74]:
''
In [75]:
proxies.items()
Out[75]:
[('http://213.141.146.146:8080', ''),
 ('http://54.85.145.16:3128', ''),
 ('http://218.108.232.93:80', '')]
In [76]:
proxies.popitem()[1]
Out[76]:
''
In [77]:
proxies.items()
Out[77]:
[('http://54.85.145.16:3128', ''), ('http://218.108.232.93:80', '')]

Можно стереть запись задав значение по ключу, можно по номеру записи в словаре.

Зачем здесь @classmethod

In []:
 @classmethod
 def from_crawler(cls, crawler):
    return cls(crawler.settings)

Очевидно, для того, чтобы получить настройки из краулера (там, кстати, тоже можно умтановить прокси...). Но вот, как этот метод вызывается я пока не понимаю. В Лутце (стр. 894) в примере со счетчиков вызовов класса написано:

Интересно отметить, что аналогичные действия можно реализовать с помощью метода класса – следующий класс обладает тем же поведением, что и класс со статическим методом, представленный выше, но в нем используется метод класса, который в первом аргументе принимает класс экземпляра. Методы класса автоматически получают объект класса:

In []:
class Spam:
    numInstances = 0 # Вместо статического метода используется метод класса
    def __init__(self):
        Spam.numInstances += 1
    def printNumInstances(cls):
        print(Number of instances:, cls.numInstances)
    printNumInstances = classmethod(printNumInstances)

Используется этот класс точно так же, как и предыдущая версия, но его метод printNumInstances принимает объект класса, а не экземпляра, независимо от того, вызывается он через имя класса или через экземпляр

В настоящее время статические методы, к примеру, могут быть оформлены в виде декораторов, как показано ниже:

In []:
class C:
    @staticmethod # Синтаксис декорирования
    def meth():
    ...

С технической точки зрения, это объявление имеет тот же эффект, что и фрагмент ниже (передача функции декоратору и присваивание результата первоначальному имени функции):

In []:
class C:
    def meth():
    ...
    meth = staticmethod(meth) # Повторное присваивание имени

Результат, возвращаемый функцией-декоратором, повторно присваивается имени метода. В результате вызов метода по имени функции фактически будет приводить к вызову результата, полученному от декоратора staticmethod.

def process_request - это обязятельный метод Writing your own downloader middleware

Этот метод и вызывается в загрузчике This method is called for each request that goes through the download middleware

Проверим, работают ли функции

In [79]:
import random
In [81]:
random.choice(proxies.keys()),random.choice(proxies.keys()), random.choice(proxies.keys())
Out[81]:
('http://54.85.145.16:3128',
 'http://218.108.232.93:80',
 'http://218.108.232.93:80')
In []:
 
In []:
C:\Users\kiss\Documents\GitHub\dirbot_se1>scrapy crawl dmoz
2014-10-21 21:02:11+0400 [scrapy] INFO: Scrapy 0.20.1 started (bot: scrapybot)
2014-10-21 21:02:11+0400 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2014-10-21 21:02:11+0400 [scrapy] DEBUG: Overridden settings: {'DEFAULT_ITEM_CLASS': 'dirbot.items.Website', 'NEWSPIDER_MODULE': 'di
rbot.spiders', 'SPIDER_MODULES': ['dirbot.spiders'], 'RETRY_TIMES': 10, 'RETRY_HTTP_CODES': [500, 503, 504, 400, 403, 404, 408]}
2014-10-21 21:02:13+0400 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderStat
e
2014-10-21 21:02:14+0400 [scrapy] DEBUG: Enabled downloader middlewares: RetryMiddleware, RandomProxy, HttpAuthMiddleware, DownloadT
imeoutMiddleware, UserAgentMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddlewar
e, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-10-21 21:02:14+0400 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlL
engthMiddleware, DepthMiddleware
C:\Users\kiss\Anaconda\lib\site-packages\scrapy\contrib\pipeline\__init__.py:21: ScrapyDeprecationWarning: ITEM_PIPELINES defined as
 a list or a set is deprecated, switch to a dict
  category=ScrapyDeprecationWarning, stacklevel=1)
2014-10-21 21:02:14+0400 [scrapy] DEBUG: Enabled item pipelines: FilterWordsPipeline
2014-10-21 21:02:14+0400 [dmoz] INFO: Spider opened
2014-10-21 21:02:14+0400 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-10-21 21:02:14+0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6031
2014-10-21 21:02:14+0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-10-21 21:02:35+0400 [scrapy] INFO: Removing failed proxy <http://218.108.232.93:80>, 4 proxies left
2014-10-21 21:02:35+0400 [dmoz] DEBUG: Retrying <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (failed 1 ti
mes): TCP connection timed out: 10060: ╧юя√Єър єёЄрэютшЄ№ ёюхфшэхэшх с√ыр схчєёях°эющ, Є.ъ. юЄ фЁєуюую ъюья№■ЄхЁр чр ЄЁхсєхьюх тЁхь 
 эх яюыєўхэ эєцэ√щ юЄъышъ, шыш с√ыю ЁрчюЁтрэю єцх єёЄрэютыхээюх ёюхфшэхэшх шч-чр эхтхЁэюую юЄъышър єцх яюфъы■ўхээюую ъюья№■ЄхЁр..
2014-10-21 21:02:35+0400 [scrapy] INFO: Removing failed proxy <http://218.108.232.93:80>, 3 proxies left
2014-10-21 21:02:35+0400 [dmoz] ERROR: Error downloading <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/>

        Traceback (most recent call last):
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 423, in errback
            self._startRunCallbacks(fail)
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
        --- <exception caught here> ---
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "C:\Users\kiss\Anaconda\lib\site-packages\scrapy\core\downloader\middleware.py", line 57, in process_exception
            response = method(request=request, exception=exception, spider=spider)
          File "dirbot\randomproxy.py", line 51, in process_exception
            self.proxies.pop(proxy)
        exceptions.KeyError: 'http://218.108.232.93:80'

2014-10-21 21:02:56+0400 [scrapy] INFO: Removing failed proxy <http://218.108.232.93:80>, 3 proxies left
2014-10-21 21:02:56+0400 [dmoz] ERROR: Error downloading <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
        Traceback (most recent call last):
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 423, in errback
            self._startRunCallbacks(fail)
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
        --- <exception caught here> ---
          File "C:\Users\kiss\Anaconda\lib\site-packages\twisted\internet\defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "C:\Users\kiss\Anaconda\lib\site-packages\scrapy\core\downloader\middleware.py", line 57, in process_exception
            response = method(request=request, exception=exception, spider=spider)
          File "dirbot\randomproxy.py", line 51, in process_exception
            self.proxies.pop(proxy)
        exceptions.KeyError: 'http://218.108.232.93:80'

2014-10-21 21:02:56+0400 [dmoz] INFO: Closing spider (finished)
2014-10-21 21:02:56+0400 [dmoz] INFO: Dumping Scrapy stats:
        {'downloader/exception_count': 3,
         'downloader/exception_type_count/twisted.internet.error.TCPTimedOutError': 3,
         'downloader/request_bytes': 793,
         'downloader/request_count': 3,
         'downloader/request_method_count/GET': 3,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2014, 10, 21, 17, 2, 56, 480000),
         'log_count/DEBUG': 7,
         'log_count/ERROR': 2,
         'log_count/INFO': 6,
         'scheduler/dequeued': 3,
         'scheduler/dequeued/memory': 3,
         'scheduler/enqueued': 3,
         'scheduler/enqueued/memory': 3,
         'start_time': datetime.datetime(2014, 10, 21, 17, 2, 14, 361000)}
2014-10-21 21:02:56+0400 [dmoz] INFO: Spider closed (finished)

C:\Users\kiss\Documents\GitHub\dirbot_se1>

Видим, что строчка Removing failed proxy http://218.108.232.93:80, 3 proxies left посторяется три раза, сразу пробуем "тупо откатить" (прежде, чем тратить время на понимание ошибки):

In [85]:
proxies
Out[85]:
{'http://218.108.232.93:80': '', 'http://54.85.145.16:3128': ''}
In [86]:
proxies['http://218.108.232.93:80']
Out[86]:
''
In []:
Вот так выглядит первоначальный код, он выполняется..., а я выыше грубо ошибся, посколькупрбовал **del proxies.items()[1]**
In [87]:
del proxies['http://218.108.232.93:80']
In [88]:
proxies
Out[88]:
{'http://54.85.145.16:3128': ''}

Пробуем изменить строчку на первоначальную и опять получаем те же ошибки, пора разобраться с командой del ... Ответ находим в документации 2. Built-in Functions

In []:
delattr(object, name)
This is a relative of setattr(). The arguments are an object and a string. 
The string must be the name of one of the objects attributes. 
The function deletes the named attribute, provided the object allows it. 

For example, delattr(x, 'foobar') is equivalent to del x.foobar.


Посты чуть ниже также могут вас заинтересовать

Комментариев нет:

Отправить комментарий