My Scrapy project doesn't store the data in a file
My spider contains the following code (I just started working with Scrapy).
I am trying to scrape the top 50 anime from MyAnimeList and do something with them:



import scrapy


class AnimeSpider(scrapy.Spider):
    name = "animelist"
    start_urls = [
        'https://myanimelist.net/topanime.php'
    ]

    def parse(self, response):
        for anime in response.css('tr.rankinglist'):
            yield {
                'name': anime.css("div.di-ib clearfix >a::text").extract_first(),
                'score': anime.css(".js-top-ranking-score-col di-ib al > span.text::text").extract_first(),
            }

        next_page = response.css('a.link-blue-box next::attr("href")').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
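For reference, the "yield a dict" pattern the spider relies on can be sketched in plain Python, without Scrapy. The rows below are made-up stand-ins for parsed `<tr>` elements; in the real spider each value comes from `anime.css(...).extract_first()`:

```python
# Plain-Python sketch of the "yield a dict" pattern Scrapy expects from
# parse(): each yielded dict becomes one item in the -o anime.json feed.
def parse(rows):
    for name, score in rows:
        # Stand-in for anime.css(...).extract_first() calls.
        yield {"name": name, "score": score}

items = list(parse([
    ("Fullmetal Alchemist: Brotherhood", "9.25"),
    ("Steins;Gate", "9.13"),
]))
print(len(items))          # 2
print(items[0]["name"])    # Fullmetal Alchemist: Brotherhood
```

If `parse` never reaches a `yield` (for example because the CSS selector matched nothing), the generator produces zero items and the feed file stays empty.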


When I run my code, the following output appears, and to be honest I understand none of it:



berseker@berseker-Inspiron-15-3567:~/python$ scrapy runspider anime_spider.py -o anime.json
2018-11-09 12:02:20 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-11-09 12:02:20 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) - [GCC 7.2.0], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o 27 Mar 2018), cryptography 2.2.2, Platform Linux-4.15.0-38-generic-x86_64-with-debian-stretch-sid
2018-11-09 12:02:20 [scrapy.crawler] INFO: Overridden settings: {'FEED_FORMAT': 'json', 'FEED_URI': 'anime.json', 'SPIDER_LOADER_WARN_ONLY': True}
2018-11-09 12:02:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2018-11-09 12:02:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-11-09 12:02:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-11-09 12:02:20 [scrapy.middleware] INFO: Enabled item pipelines:

2018-11-09 12:02:20 [scrapy.core.engine] INFO: Spider opened
2018-11-09 12:02:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-09 12:02:20 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-11-09 12:02:23 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://myanimelist.net/topanime.php> (referer: None)
2018-11-09 12:02:23 [scrapy.core.engine] INFO: Closing spider (finished)
2018-11-09 12:02:23 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 226,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 16558,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 11, 9, 6, 32, 23, 224266),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'memusage/max': 53026816,
 'memusage/startup': 53026816,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 11, 9, 6, 32, 20, 685482)}
2018-11-09 12:02:23 [scrapy.core.engine] INFO: Spider closed (finished)


After all this, my file anime.json stays empty. Why does this happen?
What am I doing wrong?
  • To test whether your selectors work, run scrapy shell "https://myanimelist.net/topanime.php" in the terminal and then evaluate them interactively; for example, response.css('tr.rankinglist') will show you exactly what it returns, and you can fine-tune your selectors from there.
    – Ayush Kumar
    Nov 9 at 9:19
python web-scraping scrapy scrapy-spider
asked Nov 9 at 6:46
ASHUTOSH SINGH
1 Answer
Your selector

response.css('tr.rankinglist')

is wrong: it returns an empty list, so you don't get any items.
The class is ranking-list. The selector should be

response.css('tr.ranking-list')

or

response.xpath("//tr[@class='ranking-list']")
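The hyphen is the whole story: a CSS class selector compares whole space-separated class tokens, so `class="ranking-list"` contains the single token `ranking-list`, never `rankinglist`. A stdlib-only sketch (the HTML snippet here is a made-up stand-in for MyAnimeList's markup, not its real page):

```python
# Stdlib-only illustration of why 'tr.rankinglist' matches nothing:
# class="ranking-list" holds one token, "ranking-list".
from html.parser import HTMLParser

class TrClassCollector(HTMLParser):
    """Collects the class tokens of every <tr> element it sees."""
    def __init__(self):
        super().__init__()
        self.tr_classes = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            # attrs is a list of (name, value) pairs; split class into tokens.
            self.tr_classes.append(dict(attrs).get("class", "").split())

collector = TrClassCollector()
collector.feed('<table><tr class="ranking-list"><td>Fullmetal Alchemist</td></tr></table>')

print("ranking-list" in collector.tr_classes[0])  # True  -> tr.ranking-list matches
print("rankinglist" in collector.tr_classes[0])   # False -> tr.rankinglist does not
```

Because no row matched, `parse` yielded nothing, and the JSON feed exporter had no items to write.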
  • Thanks, it worked. I really need to brush up on my CSS.
    – ASHUTOSH SINGH
    Nov 9 at 9:57
edited Nov 9 at 10:15
starrify
answered Nov 9 at 8:54
E. Amanatov