Why Entity Framework is better




















To compare the two, I timed a simple data fetch with a Stopwatch, materializing the results with ToList and writing the elapsed time with Console.WriteLine, for both ADO.NET and Entity Framework. The average execution times: ADO.NET took only 2 milliseconds, whereas Entity Framework took more than 4 milliseconds.

When I ran this method repeatedly within a single run, the second execution was faster than the first. Why was EF's second execution faster than the first?

The first time EF runs, it loads metadata into memory, and that takes time: it builds an in-memory representation of the model from the edmx file, or from source code if you are using Code First.

Entity Framework itself uses ADO.NET under the hood, so it can't be faster. But it makes development much faster, and it improves the maintainability of your code. We should not compare ADO.NET with EF just for a simple data fetch; there is a lot going on under the hood that you would otherwise have to code manually. Second, very simply put, the JIT compiler compiles code the first time it is executed.

This includes memory allocation and all sorts of initializations, which is another reason the first execution is slower.

Compared to EF Core, Dapper certainly allows more freedom in operating on the database, but there is a greater risk of making a mistake when writing a SQL query. The same applies to updating the database schema: EF Core can detect changes and generate a migration by itself, while in Dapper you have to edit the SQL code manually.

There is no doubt, however, that Dapper has its supporters, mainly due to its performance; the exceptionnotfound blog, for example, has published benchmarks comparing the two. Tracking changes to entities in EF Core can be turned off with the AsNoTracking option, which makes read operations significantly faster. All in all, Dapper is much faster at reading from the database and will certainly be comparably fast when writing. However, it requires writing SQL queries, which can expose the developer to errors.

I have personally used Dapper on several projects, and in only one of them was the choice dictated by performance. For simple logic that saves and retrieves data from the database, I would use Entity Framework Core because of its simplicity and the convenience of introducing changes. (One commenter suggested also comparing against linq2db, which shares some of EF Core's features, such as LINQ queries, but not others, such as change tracking.)

When execution plan reuse works against you (for example, through parameter sniffing), there are things you can do about it. The first is to make the LINQ statements themselves less generic, so that problematic parameter values get their own statement text.
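The code sample referred to here did not survive in this copy. A minimal sketch of the idea, with a hypothetical `Pupil` entity and `lastName` variable: by branching on the problematic value, each branch produces different SQL text and therefore gets its own execution plan.

```csharp
List<Pupil> pupils;
if (lastName == "Smith")
{
    // Inlined literal: this statement gets its own plan,
    // suited to the common, non-selective value.
    pupils = context.Pupils.Where(p => p.LastName == "Smith").ToList();
}
else
{
    // Parameterized: all other (selective) values share one plan.
    pupils = context.Pupils.Where(p => p.LastName == lastName).ToList();
}
```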

An alternative is to make SQL Server recompile the plans each time. This adds a few milliseconds of CPU on each execution, which is likely to be a problem only if the query runs very frequently, or the server is already CPU-limited. You can write a command-interceptor class that appends OPTION (RECOMPILE) to the generated SQL. In spite of the previous example, the reuse of execution plans is almost always a good thing, because it avoids the need to regenerate a plan each time a query is run.
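The original code listing was lost from this copy. A minimal sketch of such an interceptor, using EF6's `DbCommandInterceptor` base class (the class name here is my own), might look like this:

```csharp
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;

// Appends OPTION (RECOMPILE) to every query EF sends, forcing SQL Server
// to build a fresh execution plan on each execution.
public class RecompileInterceptor : DbCommandInterceptor
{
    public override void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        if (!command.CommandText.EndsWith(" OPTION (RECOMPILE)"))
        {
            command.CommandText += " OPTION (RECOMPILE)";
        }
        base.ReaderExecuting(command, interceptionContext);
    }
}

// Register once at application startup:
// DbInterception.Add(new RecompileInterceptor());
```

A per-query alternative, if only a few queries suffer, is to keep the interceptor but make it check for a marker comment in the command text before appending the hint.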

In order for a plan to be reused, the statement text must be identical which, as we just saw, is the case for parameterized queries. But if you page through results using Skip and Take, the integer values are embedded in the generated SQL as constants, so every page produces a different statement. This is bad for several reasons. Firstly, it causes an immediate performance hit, because Entity Framework has to generate a new query each time, and SQL Server has to generate a new execution plan.

Secondly, it significantly increases memory use, both in Entity Framework, which caches all the extra queries, and in SQL Server, which caches the plans even though they are unlikely to be reused.

There are two things you can do about this. EF 6 includes versions of Skip and Take which take a lambda instead of an int, enabling it to see that variables have been used and to parameterize the query, so we can pass lambdas instead (you need to ensure you reference System.Data.Entity). Separately, be aware that Entity Framework issues a separate INSERT statement for each new row, so the performance consequences are not good if you need to insert a lot of data! You can use a NuGet package, EF.BulkInsert, which batches up INSERT statements instead, in much the way that the SqlBulkCopy class does.
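The paging sample itself was dropped from this copy; a sketch of the parameterized call (entity and variable names are assumptions) might be:

```csharp
using System.Data.Entity; // the lambda overloads of Skip/Take live here

int pageSize = 20;
int pageNumber = 3;
int skip = pageSize * pageNumber;

// Lambdas let EF parameterize the values, so the generated SQL text is
// identical for every page and the cached execution plan is reused.
var page = context.Pupils
    .OrderBy(p => p.LastName)
    .Skip(() => skip)
    .Take(() => pageSize)
    .ToList();
```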

This approach is also supported out of the box in Entity Framework 7.

Sometimes the way we access data causes the client application to do extra work without the database itself being affected.

In the line-level timing information, we can see that almost all of the time, over 34 seconds in total, was spent adding Pupils to our context, while the process of actually writing changes out to the database took a little over 1 second, of which only a small part was spent actually running the queries; the rest of the time is inside Entity Framework's own code, in the Core namespace. So the time is all being spent tracking changes.

Entity Framework will do this change detection by default any time that you add or modify entities, so as you modify more entities, things get slower. One improvement is the AddRange method, which is much faster because it is optimized for bulk insert. In more complex cases, such as a bulk import of multiple classes, you might consider disabling automatic change tracking instead. Rerunning with that change in place, we can see that saving changes to the database still takes a little over a second, but the time spent adding the entities to the context has been reduced from 34 seconds down to 85 ms, roughly a 400x speed boost!
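The code listings were stripped from this copy; assuming a `pupils` collection and an EF6-style context, the two techniques look roughly like this:

```csharp
// Option 1: AddRange adds all entities in a single call, so change
// detection runs once rather than once per entity.
context.Pupils.AddRange(pupils);
context.SaveChanges();

// Option 2: for complex bulk imports, turn off automatic change
// detection entirely (re-enable it afterwards, or scope it carefully).
context.Configuration.AutoDetectChangesEnabled = false;
```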

By default, Entity Framework also tracks every entity that a query returns, so that it can detect any later modifications. That adds extra overhead, and also significantly increases memory requirements, which is particularly problematic when retrieving larger data sets. If you know you only want to read data from a database (for example, in an MVC controller which is just fetching data to pass to a view), you can explicitly tell Entity Framework not to do this tracking.
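The snippet was lost from this copy; a sketch using the standard AsNoTracking extension (entity and property names are assumptions):

```csharp
// AsNoTracking returns detached entities: no change tracking, and
// less CPU and memory per row - ideal for read-only queries.
var pupils = context.Pupils
    .AsNoTracking()
    .Where(p => p.LastName == "Smith")
    .ToList();
```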

The importance of startup time varies by application. Fortunately, there are some things we can do to get EF to start up quickly. Ordinarily, when EF is first used, it must generate the views it uses to work out what queries to run.

This work is only done once per app domain, but it can certainly be time-consuming. The answer is to pre-generate the views, for example with the Entity Framework Power Tools extension for Visual Studio. When you have this installed, right-click on your context file, then from the Entity Framework menu choose Generate Views.

A new file containing the pre-generated views will be added to your project. You will need to regenerate it whenever the model changes, but this is well worth doing, particularly for more complex models. Note: for an in-depth article on precompiling views, including a way to precompile in code, see this article.

Even if you precompile views, Entity Framework still has to do work when a context is first initialized, and that work is proportional to the number of entities in your model.

However, a common way of working with EF is to automatically generate a context from a pre-existing database, and to simply import all objects.

At the time this feels prudent, as it maximizes your ability to work with the database. But since even fairly modest databases can contain hundreds of objects, the performance implications quickly get out of control, and startup times can run into minutes. A separate startup cost is JIT compilation: most assemblies in the .NET Framework are precompiled to native code with NGen, but EntityFramework.dll is not, so it must be JIT-compiled when the application starts. On slower machines this can take several seconds, and will probably take at least a couple of seconds even on a decent machine. The fix is to run NGen over the EF assembly yourself.
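The exact commands were lost from this copy; the usual invocation looks something like the following, where the framework directory varies with .NET version and bitness, and the assembly path (here hypothetical) is wherever your application's copy of EntityFramework.dll lives:

```
cd /d %WINDIR%\Microsoft.NET\Framework\v4.0.30319
ngen install "C:\MyApp\bin\EntityFramework.dll"
```

Remember to re-run this whenever the NuGet package is updated, since the native image is tied to the exact assembly.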

We start to get into small gains at this point, but on startup EF can also run several queries against the database. By way of example, it starts with a query to find the SQL Server edition, which might take a few tens of milliseconds. We can avoid this by supplying the answer ourselves: first we create a class which implements IManifestTokenResolver and returns the manifest token directly, then a class which inherits from DbConfiguration and, in its constructor, sets the ManifestTokenResolver to our custom class.
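The listing did not survive in this copy; a sketch using EF6's `IManifestTokenResolver` and `DbConfiguration` (class names are my own, and the "2008" token assumes you are happy with SQL Server 2008-compatible SQL):

```csharp
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// Returns the SQL Server manifest token directly, so EF does not need
// to query the server for its edition/version on startup.
public class FixedManifestTokenResolver : IManifestTokenResolver
{
    private readonly IManifestTokenResolver _default
        = new DefaultManifestTokenResolver();

    public string ResolveManifestToken(DbConnection connection)
    {
        if (connection is System.Data.SqlClient.SqlConnection)
        {
            return "2008"; // assumption: SQL Server 2008-compatible SQL is fine
        }
        return _default.ResolveManifestToken(connection);
    }
}

// EF discovers this class automatically if it lives in the same
// assembly as your DbContext.
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        SetManifestTokenResolver(new FixedManifestTokenResolver());
    }
}
```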

Entity Framework supports Multiple Active Result Sets (MARS), which allows it to make and receive multiple requests to SQL Server over a single connection, reducing the number of round trips. Just make sure your connection string enables it.

In most data-access scenarios, performance will degrade with the volume of data, and in some cases the time taken can even rise exponentially with data volumes.
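The fragment was lost from this copy; the relevant setting is the `MultipleActiveResultSets` keyword (the rest of this connection string is illustrative only):

```
Data Source=.;Initial Catalog=School;Integrated Security=True;MultipleActiveResultSets=True
```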

There may also be times when you need to fall back to stored procedures (which can be run by Entity Framework), or even to different technologies entirely. Teams often run into performance difficulties with Entity Framework, particularly when starting out.

Some will even consider abandoning the technology entirely, often after relatively little time spent trying to get it to work well. Start with a performance profiler that lets you understand both what your application code is doing and what database queries it runs. SQL Monitor helps you keep track of your SQL Server performance, and if something does go wrong, it gives you the answers you need to find and fix problems fast.



