Quark mount: memory blows up when deleting a large number of files #7088
To fix your trouble, try downloading this fix; I saw it in another issue.
Follow-up: after rolling back from v3.36.0 to v3.25.1, everything unexpectedly works fine. Docker limits memory to 700 MB, and in actual use it never exceeded 350 MB, with no configuration changes at all.
Another follow-up: memory is normal on v3.25.1, but Quark video playback now takes 2-3 seconds to buffer, probably because that driver is too old. So I worked backwards from the latest v3.36.0 and found that as of v3.34.0 both memory and Quark speed are normal. My current configuration sets the cache expiration to 1 minute, and memory basically stays under 300 MB.
Based on the above, after a quick look at the code, my guess is:
I'm trying to reproduce this locally. Environment: Linux amd64, k3s. I used dd to generate 100 files of 2 MB each, copied them from local to Quark, and deleted them six minutes later.
Thanks for the reply. It may well be an environment difference. In my tests, v3.36.0 blows up memory (Docker limit 700 MB) within 2 minutes after just a few refreshes, while v3.34.0 is indeed fine. Here are my reproduction steps in detail: 1. The system is Debian 11, with AList installed as the Docker version, using the following install command:
Also limit the container memory to 700 MB.
2. After installation, mount the Quark drive at path /quark with the local proxy, fill in the cookie, and leave everything else at defaults. 3. Save any TV series to the Quark drive, e.g. one with 100 episodes. 4. On a phone, install the 流舟文件 app (free) and mount AList over WebDAV (I use the admin user directly). Open /quark, open the series, then pull down to refresh. Thumbnail generation starts, and AList's memory climbs steadily and is basically never reclaimed. If it hasn't blown up yet, close the app and repeat the operation; within about 2 minutes memory reaches 700 MB and blows up, after which the alist process restarts at a few dozen MB.
AList itself only generates thumbnails locally for local storage. If thumbnailing here is a feature of the 流舟文件 app, that is very likely the main cause of the high memory usage.
@Mmx233 Other operations also blow up memory; that was just one example. For instance, selecting 100 files and deleting them together also blows up, while everything is fine on v3.34.0.
Since I still can't reproduce this locally, please run the following debugging steps on your machine:
According to the log, 80% of the memory is consumed by net.NewBuffer in the WebDAV HttpServer. Try:
@Mmx233 Just tried it; it still leaks.
High memory usage and memory not being reclaimed after five minutes are two separate problems. Please confirm whether both still exist.
Judging by this description, it should be
Sorry, the image tag I provided earlier was copied incorrectly. The image with the fix patch should be
@PHCSJC Tried mmx233/alist:v3.36.0-gamma2; it works now, almost exactly like v3.34.0. Memory no longer keeps climbing and is automatically reclaimed to under 50 MB.
Is peak memory usage also low? I can see that the Server Buffer's size and reuse still have room for optimization.
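A common Go pattern for the buffer reuse mentioned here is `sync.Pool`, which lets request handlers share a small set of fixed-size buffers instead of allocating a fresh one per request. This is a sketch of the general technique, not alist's actual WebDAV implementation (the 32 KiB chunk size and function name are assumptions):

```go
// Reuse fixed-size copy buffers across calls via sync.Pool instead of
// allocating a new buffer per request.
package main

import (
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	// Store a pointer to the slice so Put/Get avoid an extra allocation.
	New: func() any {
		b := make([]byte, 32*1024) // hypothetical 32 KiB chunk size
		return &b
	},
}

// copyWithPool copies src into dst through a pooled intermediate buffer,
// returning the number of bytes copied.
func copyWithPool(dst, src []byte) int {
	bp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bp) // return the buffer for reuse by other callers
	buf := *bp

	n := 0
	for n < len(src) && n < len(dst) {
		c := copy(buf, src[n:])
		c = copy(dst[n:], buf[:c])
		n += c
	}
	return n
}

func main() {
	src := make([]byte, 100*1024)
	dst := make([]byte, 100*1024)
	fmt.Println(copyWithPool(dst, src)) // prints 102400
}
```

Because pooled buffers are dropped by the GC when idle, peak usage tracks concurrency rather than total request count, which matches the low peak reported in the next comment.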
@Mmx233 Peak stayed under 300 MB.
* chore(webdav): fix warnings in HttpServe
* fix(webdav): HttpServe memory leak
Please make sure of the following things
I have read the documentation.
I'm sure there are no duplicate issues or discussions.
I'm sure it's due to AList and not something else (such as Network, Dependencies or Operational).
I'm sure this issue is not fixed in the latest version.
AList Version / AList 版本
v3.36.0
Driver used / 使用的存储驱动
Quark (夸克)
Describe the bug / 问题描述
With Quark mounted, deleting a large number of files blows up memory, climbing to 2 GB+ until the host freezes.
Reproduction / 复现链接
Upload or transfer 100+ small files to the Quark drive, mount AList with any WebDAV client, select all the files, delete them, and watch memory usage. Even if it doesn't blow up, the extra memory is never reclaimed; each further operation raises it again until it finally blows up.
Config / 配置
{
"force": false,
"site_url": "",
"cdn": "",
"jwt_secret": "aaaaaaaa",
"token_expires_in": 48,
"database": {
"type": "sqlite3",
"host": "",
"port": 0,
"user": "",
"password": "",
"name": "",
"db_file": "data/data.db",
"table_prefix": "x_",
"ssl_mode": "",
"dsn": ""
},
"meilisearch": {
"host": "http://localhost:7700",
"api_key": "",
"index_prefix": ""
},
"scheme": {
"address": "0.0.0.0",
"http_port": 5244,
"https_port": -1,
"force_https": false,
"cert_file": "",
"key_file": "",
"unix_file": "",
"unix_file_perm": ""
},
"temp_dir": "data/temp",
"bleve_dir": "data/bleve",
"dist_dir": "",
"log": {
"enable": true,
"name": "data/log/log.log",
"max_size": 50,
"max_backups": 30,
"max_age": 28,
"compress": false
},
"delayed_start": 0,
"max_connections": 0,
"tls_insecure_skip_verify": true,
"tasks": {
"download": {
"workers": 5,
"max_retry": 1
},
"transfer": {
"workers": 5,
"max_retry": 2
},
"upload": {
"workers": 5,
"max_retry": 0
},
"copy": {
"workers": 5,
"max_retry": 2
}
},
"cors": {
"allow_origins": [
""
],
"allow_methods": [
""
],
"allow_headers": [
"*"
]
},
"s3": {
"enable": false,
"port": 5246,
"ssl": false
}
}
Logs / 日志
No response