Question:

FastAPI cannot find the model definition when run with uvicorn

蒋嘉颖
2023-03-14

I want to serve a PyTorch model from a FastAPI backend. When I run the code with plain Python, it works fine: the pickled model can be loaded using the class defined in the file. When I start the same file with uvicorn, it cannot find the class definition.

The source code looks like this:

import uvicorn
import json
from typing import List
from fastapi import Body, FastAPI
from fastapi.encoders import jsonable_encoder
import requests
from pydantic import BaseModel

#from model_ii import Model_II_b

import dill as pickle
import torch as T
import sys

app = FastAPI()
current_model = 'model_v2b_c2_small_ep15.pkl'
verbose_model = False  # for model v2

class Model_II_b(T.nn.Module):
    [...]

@app.post('/function')
def API_call(req_json: dict = Body(...)):
    try:
        # load model...
        model = pickle.load(open('models/' + current_model, 'rb'))
        result = model.dosomething_with(req_json)

        return result

    except Exception as e:
        raise e

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

When I run it with python main.py it works fine and I get results. When I run it with uvicorn main:app and send a request, I get the following error:

AttributeError: Can't get attribute 'Model_II_b' on <module '__mp_main__' from '/opt/webapp/env/bin/uvicorn'>

Both should be using the same Python environment, since I am running the uvicorn that is installed inside that environment.

I hope someone can spot what is wrong with my setup or code.

Update, full stack trace:

(model_2) root@machinelearning-01:/opt/apps# uvicorn main:app --env-file /opt/apps/env/pyvenv.cfg --reload
INFO:     Loading environment from '/opt/apps/env/pyvenv.cfg'
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [164777] using statreload
INFO:     Started server process [164779]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     127.0.0.1:33872 - "POST /ml/v2/predict HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/apps/env/lib/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/opt/apps/env/lib/python3.6/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/opt/apps/env/lib/python3.6/site-packages/fastapi/applications.py", line 183, in __call__
    await super().__call__(scope, receive, send)  # pragma: no cover
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/applications.py", line 102, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/routing.py", line 550, in __call__
    await route.handle(scope, receive, send)
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/opt/apps/env/lib/python3.6/site-packages/fastapi/routing.py", line 197, in app
    dependant=dependant, values=values, is_coroutine=is_coroutine
  File "/opt/apps/env/lib/python3.6/site-packages/fastapi/routing.py", line 149, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/opt/apps/env/lib/python3.6/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
    return await loop.run_in_executor(None, func, *args)
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "./main.py", line 155, in API_call
    raise e
  File "./main.py", line 129, in API_call
    model = pickle.load(open('models/' + current_model, 'rb'))
  File "/opt/apps/env/lib/python3.6/site-packages/dill/_dill.py", line 270, in load
    return Unpickler(file, ignore=ignore, **kwds).load()
  File "/opt/apps/env/lib/python3.6/site-packages/dill/_dill.py", line 473, in load
    obj = StockUnpickler.load(self)
  File "/opt/apps/env/lib/python3.6/site-packages/dill/_dill.py", line 463, in find_class
    return StockUnpickler.find_class(self, module, name)
AttributeError: Can't get attribute 'Model_II_b' on <module '__mp_main__' from '/opt/apps/env/bin/uvicorn'>

1 Answer

籍利
2023-03-14

With the help of @lsabi I found the solution here: https://stackoverflow.com/a/51397373/13947506

With a custom unpickler my problem was solved:

import dill as pickle

class CustomUnpickler(pickle.Unpickler):

    def find_class(self, module, name):
        # Resolve the model class explicitly instead of relying on the
        # module name recorded in the pickle ('__main__'), which no longer
        # contains the class when the app is started by uvicorn.
        if name == 'Model_II_b':
            from model_ii_b import Model_II_b
            return Model_II_b
        return super().find_class(module, name)

current_model = 'model_v2b_c2_small_ep24.pkl'

model = CustomUnpickler(open('models/' + current_model, 'rb')).load()