SQL Timeout with dotnet core in Docker when getting somewhat larger than usual DB results from local SQL Server




We are developing a .NET Core project in Docker for Windows, and we need to retrieve a list of countries (~250 records) with about 9 additional columns, none longer than a GUID. While developing, each developer uses a local instance of Microsoft SQL Server 2017 Developer edition.



The problem is, when selecting these ~250 records, some new and powerful workstations have no problem (although this implies they might have problems with larger queries). However, some weaker and older laptops cannot execute the query, and SQL Server throws a timeout. We thought this was a RAM problem and increased the RAM of the laptops, but the issue was not resolved.



The odd thing is that there is a record-count threshold at which the timeout is thrown. I don't remember the exact number for each laptop, but let's say 120. When the number of records is 120, the query returns successfully in about 2-3 seconds. However, once the number of records becomes 121, the timeout exception is thrown. And there is no issue when we use a remote SQL Server, even though that server is a VM sharing its host with 6 other VMs and is fairly slow when connected to via SQL Server Management Studio. Similarly, managing the local server with SQL Server Management Studio is fast, and the very same query executes in about 1 second.



What could be the problem with this behavior? Is there somehow a limit on Docker's TCP when communicating with the host?



Edit: Adding the code that retrieves the records:


public List<CountryDTO> Get(FilterParameters filterParameters)
{
    IQueryable<Country> query = dbContext.Countries.OrderBy(w => w.Name);

    // ...some basic filtering code, which does not execute since the filter is empty
    var entityList = query.ToList();
    return Mapper.Map<List<CountryDTO>>(entityList);
}
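As a quick way to check whether the failure is the client-side command timeout rather than the query itself, the timeout can be raised when configuring the context. A minimal sketch, assuming EF Core with the SQL Server provider; the connection string and the 120-second value are placeholders:

```csharp
// Sketch: raising the command timeout for all queries issued by this context.
// The connection string and timeout value below are illustrative only.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
        "Server=localhost;Database=AppDb;Trusted_Connection=True;",
        sqlOptions => sqlOptions.CommandTimeout(120)); // default is 30 seconds
}
```

If the query then succeeds (just slowly), the problem is elapsed time on the wire rather than a hard failure.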






This sounds more like a DB/query design problem than Docker. How complex is the query? Stored procedure or statement? Can you share your [relevant] DB design and query?

– brandon-barnett
Jun 2 '18 at 17:07






Well, the statement is not complex at all. Though since I am using .NET EF to access the DB, it might be a good idea to investigate the load that creates. I added the relevant code to the question.

– Yamaç Kurtuluş
Jun 4 '18 at 10:21






I seem to have the same problem, or at least a similar one. SqlClient.SqlCommand.ExecuteReader fails if the SQL statement tries to fetch "too many" columns from a single row using a simple SELECT ... FROM ... INNER JOIN ... WHERE ..., whether by stored procedure or not. If I run the project directly outside docker, everything works fine.

– cmeeren
Aug 10 '18 at 8:49








I posted an issue here, hoping it leads somewhere.

– cmeeren
Aug 10 '18 at 9:31






You mentioned that SQL Server throws a timeout but, to be clear, it is the client API that is throwing the timeout error. SQL Server will happily execute a query forever. I would focus on network. It seems the result set size may be a factor where results requiring more than one round trip on the slower clients experience the problem. Try a Wire Shark or similar trace.

– Dan Guzman
Sep 16 '18 at 22:06
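Building on the point above that the timeout is enforced by the client API, the same can be verified with raw SqlClient, where the timeout is a property on the command. A sketch only; the connection string and query text are placeholders:

```csharp
using System.Data.SqlClient;

// Sketch: SqlCommand.CommandTimeout is what actually throws the timeout;
// SQL Server itself keeps executing. Connection string and SQL are placeholders.
using (var connection = new SqlConnection("Server=localhost;Database=AppDb;Trusted_Connection=True;"))
using (var command = new SqlCommand("SELECT * FROM Countries", connection))
{
    command.CommandTimeout = 120; // seconds; 0 means wait indefinitely
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* consume rows */ }
    }
}
```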




2 Answers



In your case, there are at least 3 things that must be checked:



1. The script you are using (execute the script directly in SQL Server and check its duration; to get the exact script your program sends to SQL Server, run SQL Profiler while running your software).



3. Casting the results from your DB into your software's objects (you can investigate this with diagnostic tools such as the JetBrains profilers).



After detecting which stage the problem is in, you can go on to further steps.



Please tell me once you are sure which stage the problem is in, and I can suggest some solutions.
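For step 1, an alternative to SQL Profiler is EF Core's own logging, which prints each generated SQL statement. A sketch, assuming a newer EF Core version where LogTo is available; the connection string is a placeholder:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

// Sketch: logging the SQL that EF Core generates, without attaching SQL Profiler.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer("Server=localhost;Database=AppDb;Trusted_Connection=True;") // placeholder
        .LogTo(Console.WriteLine, LogLevel.Information); // each command is written to the console
}
```

Running the generated statement directly in SSMS then isolates whether the query or the client connection is at fault.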






1. SQL runs just fine; even when the query is executed from within the container, SQL Profiler states that it completed successfully, so it looks like packets are lost somewhere inside the connection. 2. Wireshark is a good idea; I hope it can watch the Docker network, I'll try it. 3. Checked that this is not the problem.

– Yamaç Kurtuluş
Sep 11 '18 at 7:45






You have to turn on traces for long durations, blocking, and deadlocks, plus performance counters (memory and CPU usage), to check what happens on some of your laptops. If you have multiple connections to SQL Server, SQL's logging process, which logs everything, can also help.

– Alireza Yadegari
Sep 14 '18 at 13:17




As this seems to be a problem specifically with the local Docker instance and not a SQL Server problem, I suggest the steps below to handle this issue.






Thanks for the input. These solutions are doable, but they would severely impact our development if the problem persists, because it is unpredictable and could occur at any other time, say while retrieving a smaller table.

– Yamaç Kurtuluş
Sep 11 '18 at 7:43



