
ASP.NET Performance Tuning - Making dasBlog Faster...

詹联
2023-12-01

Greg Hughes and I stayed up late last night tuning and installing a custom build of dasBlog. If you remember, dasBlog is Clemens' rewrite/imagining of Chris Anderson's BlogX codebase that has been moved over to GotDotNet and is under the supervision of Omar Shahine.

ORCSWeb, my awesome ISP, and Scott Forsyth (ASP.NET MVP who works there) had complained to me that as my traffic increased, my website instance was being a poor citizen on the shared server. My site is on a server with something like a dozen other sites. While I'd survived slashdotting, my traffic lately has been getting big enough to bother the server.

ScottF had noticed that my blog had these unfortunate characteristics (remember these are bad):

  • CPU Threads that were taking minutes to complete their work

  • Caused 9,538,000 disk reads during a time period while another site on the same server with twice as many visitors had 47,000 reads.

  • My process was tied for CPU time with "system."

  • I used 2 hours, 20 minutes of CPU time one day. My nearest competitor had used only 20 seconds.

  • I was 2nd for disk reads, and 11th for disk writes (the writes weren't bad)

  • In a day, I surpassed even the backup process which was running for a WEEK.

These bullets, of course, are quite BAD. So, during my recent burst of creativity when I added a number of features to dasBlog including a comment spam solution, a referral spam solution, and an IP address blacklist, I did some formal performance work.

If you're familiar with dasBlog, I yanked the need for entryCache.xml, categoryCache.xml and blogData.xml, which were older BlogX holdovers, and moved them into thread-safe in-memory storage. I changed the EntryIDCache and other internal caches, and added output caching for RSS, Atom, and Permalinks.

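The actual dasBlog internals aren't reproduced in this post, but a minimal sketch of the idea — replacing an on-disk XML cache file with a lock-protected in-memory dictionary — might look like this. The class and member names here are hypothetical, not dasBlog's real ones:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a thread-safe in-memory cache standing in for
// the old entryCache.xml file. Names are illustrative, not dasBlog's.
public class EntryIdCache
{
    private readonly object syncRoot = new object();
    private readonly Dictionary<string, string> titlesByGuid =
        new Dictionary<string, string>();

    public void Add(string entryGuid, string title)
    {
        lock (syncRoot) // hold the lock only for the dictionary write
        {
            titlesByGuid[entryGuid] = title;
        }
    }

    public bool TryGetTitle(string entryGuid, out string title)
    {
        lock (syncRoot) // short critical section for the read as well
        {
            return titlesByGuid.TryGetValue(entryGuid, out title);
        }
    }
}
```

The point is that every lookup now touches only memory; the XML file on disk is no longer read on the request path.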

According to ScottF and the folks at ORCSWeb, their initial measurements say "from what I can tell today, this *is* 250% better. CPU used only 20 minutes [as opposed to nearly 2.5 hours] of time by the end of the day and disk IO was much less than normal." This is early, but we'll see if these numbers hold.

I seem to have a few other bugs to work out, so holler at me if the site's goofed, but otherwise I hope to get Omar to integrate these changes into his own great new stuff coming in dasBlog 1.7.

During this perf test, I used Perfmon, CLR Profiler and other tools, but mostly I thought. I just literally sat down and thought about it. I tried to understand the full call stack of a single request. Once you really know what's going on, and can visualize it, you're in a much better position to profile.

Since you are a technical group, here are a few tidbits I found during this process.

  • If some condition can allow you to avoid accesses to expensive resources and bail early, do. For this blog, if an entry isn't found (based on the GUID in the URL) in my cache, now I won't even look in the XML database. Additionally, I'll send a 404, use Response.SuppressContent and end the response.

        if (WeblogEntryId.Length == 0) // example condition
        {
            Response.StatusCode = 404;
            Response.SuppressContent = true;
            Response.End();
            return null; // save us all the time
        }
  • Lock things only as long as needed and be smart about threading/locking.

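A generic illustration of the "lock only as long as needed" point — this is a sketch, not dasBlog code, and the names are made up. The expensive work happens outside the critical section; the lock protects only a quick reference swap:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of keeping critical sections short.
public class CacheRefresher
{
    private readonly object syncRoot = new object();
    private List<string> entries = new List<string>();

    public void Refresh()
    {
        // Do the slow work (loading, parsing) OUTSIDE the lock...
        List<string> fresh = LoadEntriesFromStore();

        // ...and lock only for the cheap reference swap.
        lock (syncRoot)
        {
            entries = fresh;
        }
    }

    private static List<string> LoadEntriesFromStore()
    {
        // Placeholder for slow I/O; purely illustrative.
        return new List<string> { "entry1", "entry2" };
    }
}
```

Holding the lock across the slow load would have serialized every reader behind disk I/O; swapping a fully built list keeps contention near zero.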
  • If you're serving content, caching even for a few seconds or a minute can save you time. Not caching is just wasting time. Certainly if I update a post, I can wait 60 seconds or so before it's seen updated on the site. However, if a post is hit hard, either by slashdot'ing or a DoS attack, caching for a minute will save mucho CPU. Putting <%@ OutputCache Duration="60" VaryByParam="*" %> at the top of one of my pages will cache the page for a minute using all combinations of URL parameters. To be thorough, but use more memory, one would add VaryByHeader for Accept-Language and Accept-Encoding, but this is handled in my base page.
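As a sketch of the more thorough (but more memory-hungry) directive mentioned above — the post doesn't show the exact attributes in use, so this is an assumption based on the standard ASP.NET OutputCache syntax:

```
<%@ OutputCache Duration="60" VaryByParam="*" VaryByHeader="Accept-Language;Accept-Encoding" %>
```

This caches a distinct copy of the page per combination of URL parameters, language, and compression scheme, which is why it trades memory for correctness.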

Translated from: https://www.hanselman.com/blog/aspnet-performance-tuning-making-dasblog-faster
