
High Memory usage with my trackee channels #691

Closed
shoaibahmedqureshi opened this issue Feb 14, 2018 · 65 comments
Labels
bug Something isn't working. It's clear that this does need to be fixed.

Comments


shoaibahmedqureshi commented Feb 14, 2018

We are currently experiencing high memory usage with our application channels and are not sure of the reason; sometimes usage exceeds 1 GB, which is not acceptable on a mobile device. I have reproduced this issue with our own channels in a separate project and shared it with @funkyboy; since the project contains our channels, we are not making it public on GitHub.
We suspect that #673 could be caused by this issue.
When memory usage is high we often get the message "Count response is greater then the total of pending message."
Feel free to reach out about this project.
Project URL: https://drive.google.com/drive/folders/1oYMa0wakNKXfSH7M8gAzPX_2EnShcdTH

screen shot 2018-02-15 at 3 19 23 am

Screen Shot 2018-02-15 at 1.56.50 AM

@funkyboy (Contributor)

@shoaibahmedqureshi thanks for reporting this. I asked for access to the project URL. Once granted, I'll take a look.

@funkyboy (Contributor)

@shoaibahmedqureshi I saw you sent a Google Drive link (above) and then shared another link via email. Please let me know which project I should check out.
Also, since I'll be debugging with data that belongs to your channels, I'm going to need the API key that you used to reproduce this. You have my email.


shoaibahmedqureshi commented Feb 15, 2018

@funkyboy both links point to the same code; if they don't, please let me know. I am also emailing you the API key.

@funkyboy (Contributor)

@shoaibahmedqureshi received. What are the steps to reproduce a 1GB memory situation?


shoaibahmedqureshi commented Feb 15, 2018

@funkyboy just run the code and wait a few minutes; you will see memory increasing while channels are subscribed and messages are coming in. If it does not reproduce, try rerunning the code two or three times and it should reproduce.

@funkyboy (Contributor)

@shoaibahmedqureshi The memory grows because the tick function is called every 0.3 seconds to send fake data to the channels. I put it there initially just to simulate a fairly heavy workload. Hence the messages array will grow indefinitely. I don't think it's a real use case :)
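The pattern described here can be sketched in isolation: a minimal, self-contained Swift sketch of why a timer-driven, append-only array grows without bound. All names below are hypothetical, not taken from the shared project.

```swift
import Foundation

// Sketch of the growth pattern described above: a repeating "tick"
// appends a payload to an in-memory array that is never pruned, so
// resident memory rises for as long as ticks keep arriving.
final class FakeFeed {
    private(set) var messages: [Data] = []

    // Stand-in for the project's tick function, fired every 0.3 s by a timer.
    func tick() {
        // Each fake message retains ~1 KB; nothing ever removes old entries.
        messages.append(Data(repeating: 0, count: 1_024))
    }
}

let feed = FakeFeed()
for _ in 0..<1_000 { feed.tick() }
print(feed.messages.count)   // 1000 messages, roughly 1 MB retained
```

Pruning or persisting old entries, as suggested later in this thread, is what bounds this growth.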

@hrubbani

@funkyboy where is this tick() function that sends the data? Is it in your sample project code?

We are experiencing this issue in the live app: when it starts listening to all demobus channels, memory starts increasing and after some time the app hangs. The app doesn't keep any data; it just moves a marker on a map with every update.

Memory also keeps increasing when we try those channels in the project you provided. On these channels, data arrives every second or so.

@funkyboy (Contributor)

@hrubbani It's in the project you sent me. See screenshot.
screen shot 2018-02-16 at 12 47 10

@hrubbani

thanks @funkyboy
@shoaibahmedqureshi can you remove this code and try again to see if that keeps the memory down? Or clarify in case it is no longer relevant after your changes.

@shoaibahmedqureshi (Author)

@funkyboy I removed the tick function and introduced a stateChange method which tracks channel state changes and reattaches any failed or detached channels, and I can still see memory going up. I shared the link with you; please let me know if you did not receive it.

@funkyboy (Contributor)

@shoaibahmedqureshi I received it.
I could not get the app to reach 1GB of RAM consumption, but after running for 1h20m I got it to ~650MB. Yet I don't understand the concern.

Every new message (regardless of the channel) is stored in messages, line 67. That is an in-memory array that will grow as new messages are received. Thus RAM consumption will grow.
I don't fully know the scope of your app, but it's likely that:

  • you'll store messages somewhere else (Core Data, file)
  • you'll get rid of old messages
  • some other custom logic

That's what I mean when I say that this project, as it is implemented right now, is not a "real use case".
Or am I misunderstanding your report?

Nevertheless, I'll test this again next week - without storing data in messages - to see what happens.
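The "get rid of old messages" option mentioned above can be realised with a small capped buffer that evicts the oldest entries once a limit is reached, keeping memory bounded. A minimal sketch with hypothetical names, not code from the shared project:

```swift
// Capped message store: once `capacity` is exceeded, the oldest entries
// are dropped, so the array (and the memory it retains) stays bounded.
struct CappedMessageStore {
    let capacity: Int
    private(set) var messages: [String] = []

    mutating func append(_ message: String) {
        messages.append(message)
        // Evict the oldest entries whenever we exceed the cap.
        if messages.count > capacity {
            messages.removeFirst(messages.count - capacity)
        }
    }
}

var store = CappedMessageStore(capacity: 3)
for i in 1...5 { store.append("msg-\(i)") }
print(store.messages)   // ["msg-3", "msg-4", "msg-5"]
```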

@shoaibahmedqureshi (Author)

@funkyboy I am wondering why it is not reproducing in your case, even with the code that appends to the message array commented out. It often goes into the GBs. I am attaching a screenshot in which I just reproduced the issue with the append code commented out. We suspect that after messages are received, memory is somehow not being released, and this happens before we save messages in memory. Please refer to the screenshot below.

screen shot 2018-02-17 at 1 45 45 am

@shoaibahmedqureshi (Author)

Please also refer to the screenshot below.
screen shot 2018-02-17 at 1 57 14 am


funkyboy commented Feb 19, 2018

@shoaibahmedqureshi ok, let's see how else we can tackle this.

First I'd make sure we run the app on the same Xcode version and simulator.
I am running Xcode 9.2 (9C40b) and I tested it on an iOS 11.2 iPhone 8 Plus simulator.
I think you are using the same simulator, but is your Xcode version the same? If not, any chance you can test it with 9.2?

Also, is the content on channels "constantly updating"? If not, we need to coordinate about this.

@funkyboy (Contributor)

@shoaibahmedqureshi
I removed the startPublishing function and I see zero traffic on the channels.

@shoaibahmedqureshi (Author)

@funkyboy I checked on both 9.0 and 9.2 and I am able to reproduce this. If you are not receiving data, please make sure you are using the latest project I gave you; the latest one contains the channelStateChange method. If you still have trouble reproducing it, we can jump on a call.
@hrubbani can you please clarify his question: "is the content on channels 'constantly updating'?"

screen shot 2018-02-19 at 6 05 22 pm

@hrubbani

Longitude, latitude, and timestamp change in every message. Some other fields change randomly. So every message's content is different.

@funkyboy (Contributor)

@shoaibahmedqureshi you sent me 2-3 projects and I no longer understand which one you'd like me to run. I am online on Skype. Please send it over there.


shoaibahmedqureshi commented Feb 19, 2018

@funkyboy I gave you two projects; the latest one is AblyTest-New.zip. I am available on Skype.

@funkyboy (Contributor)

For future reference: tap the refresh button to trigger the reception of messages.

@shoaibahmedqureshi (Author)

@funkyboy somehow I am unable to send the video over Skype, so let me try to upload it here.

@shoaibahmedqureshi (Author)

Let me try to upload it somewhere and send you the link, as I can't upload it here either due to a format issue.

@funkyboy (Contributor)

For future reference, the video is here: https://www.youtube.com/watch?v=PSBPgbd6_UQ

@funkyboy (Contributor)

Further inspection suggests that setting a low log level, like:

`options?.logLevel = .warn` or `options!.logLevel = .none`

brings memory consumption down quite a lot.
@shoaibahmedqureshi will confirm this with further tests.
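For reference, the settings under discussion would look like this in the test project's setup code (assuming the ably-cocoa `ARTClientOptions` API; `options` is the instance the project already creates):

```swift
// Lowering the SDK's log verbosity on the client options.
options?.logLevel = .warn   // log warnings and above only
// or, to disable SDK logging entirely:
options?.logLevel = .none
```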

@shoaibahmedqureshi (Author)

@funkyboy I performed further tests on the project you gave me over Skype in yesterday's session, in which we tested different log levels. Further inspection suggests that keeping the log level at `options!.logLevel = .none` does not stop the issue from reproducing; please refer to the video https://www.youtube.com/watch?v=zn0yQIkmQW4 .
Also notice in the video that Network shows a few KBs while memory grows by several MBs, and as per @hrubbani our messages are only a few bytes each. In this project we are clearly not saving messages anywhere. The video shows that memory only came down significantly when the channels failed, and went up again upon refresh. Please investigate further, as the test in the video suggests that even changing the log level does not make memory stable.

@funkyboy (Contributor)

@shoaibahmedqureshi thanks for your report. We are going to take a look at this. As it's difficult to replicate, it might take a while.

@funkyboy (Contributor)

@shoaibahmedqureshi One more thing. Can you delete this code?

case .failed?:
  channel?.subscribe() { message in
  }
  break

When a channel is in a failed state, trying to subscribe is pointless. It is gonna error for sure.
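The distinction drawn here (a failed channel cannot usefully be re-subscribed, while a detached one can be reattached) can be modelled as a small decision table. The enum and function below are illustrative stand-ins, not the Ably SDK's API:

```swift
// Illustrative channel states mirroring the ones discussed in this thread.
enum ChannelState {
    case attached, detached, failed
}

// What a state-change handler should do per state. Subscribing on a
// `failed` channel is pointless (it will just error again); recovery has
// to address the cause of the failure instead.
func recoveryAction(for state: ChannelState) -> String {
    switch state {
    case .attached: return "nothing to do"
    case .detached: return "reattach"
    case .failed:   return "inspect the error; re-subscribing will only fail again"
    }
}

print(recoveryAction(for: .failed))
```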

@shoaibahmedqureshi (Author)

@funkyboy done.

@funkyboy (Contributor)

@shoaibahmedqureshi Ok, that should also prevent some memory growth. Can you please test and let me know?

@funkyboy (Contributor)

@shoaibahmedqureshi I sent you a new project via email. Can you please test it and let me know?


shoaibahmedqureshi commented Feb 27, 2018

@funkyboy I had tested it earlier after removing

case .failed?:
  channel?.subscribe() { message in
  }
  break

and it turns out that the memory hike is not as quick as it used to be, but with each incoming message it goes up a little and never comes down. It started at 50 MB and after some time (approximately an hour or so) it reached 300 MB, and I did not witness any drop during this process; with this continued growth it might cross 1 GB in a few hours. What I am noticing is that once a message is received (and in the test project that message is not used anywhere), the memory is not released. To reproduce, run the application, leave it running in the foreground, and ensure that messages keep coming. With the latest project you sent me, why have you commented out all of the channels except one?

@funkyboy (Contributor)

@shoaibahmedqureshi

> With the latest project you sent me, why have you commented out all of the channels except one?

That was for testing. Please reactivate them all, so we can test under the same conditions.

The last project I sent you via email should use even less memory. Let me know.

@shoaibahmedqureshi (Author)

@funkyboy I will test it and let you know. I see there is no way to set log levels in that project at all? Is it pointing to some special commit/branch? If so, can you provide the pod link as well, so we can test it in our actual project too?

@funkyboy (Contributor)

@shoaibahmedqureshi that's exactly the point of the project I sent you: remove any kind of logging and see the impact.

At the moment I don't yet have a branch that excludes logging.


shoaibahmedqureshi commented Feb 28, 2018

@funkyboy I tested the project you gave me and memory seems improved and under control. If you could please create a pod link for it, I would test it further in our actual application.


funkyboy commented Mar 1, 2018

@shoaibahmedqureshi the "workaround" is to comment out the content of this function which you'll find in the Pods/Ably/Source folder of your original project.

I'd like to point out that memory tests so far have been performed in debug mode, which does NOT reflect the behaviour of an app in production.

I'll soon post a public project that we can use as a test bed to measure memory consumption.


funkyboy commented Mar 1, 2018

@shoaibahmedqureshi as agreed via email I uploaded the sample project, but anonymized.
See the README file for instructions on measuring memory consumption.

I ran a bunch of tests on that project but I could NOT reproduce ever-growing memory consumption.
Here's a screenshot for reference. Average consumption was around 20 MB.

screen shot 2018-03-01 at 14 05 48

Feel free to branch the project and push changes that you think are relevant to reproduce the issue.


funkyboy commented Mar 1, 2018

@shoaibahmedqureshi While testing the sample project on a device using a 3G connection I noticed some excessive memory consumption (although not ever growing). So I tweaked the logger.

Please follow the Installation steps in the README file to pull the new branch and run new tests.

Let me know.

@shoaibahmedqureshi (Author)

@funkyboy The workaround (commenting out the method) kept memory quite low and under control. Yet with the testbed you shared, after pulling the tweaked-logger fix (without commenting out the method), I could see it going above 1 GB, and I had set the log level to .none. I have not used the profiler yet. I tested it on the simulator, and @hrubbani is further testing the tweaked-logger fix on a device in our actual app. Just add the key and add all nine channels we had in the previous test project. The only change I introduced was the channelStateChange method, which is also in the previous project; I was able to reproduce the issue without introducing this method too, but I am still committing it for clarity. I was able to reproduce memory above 1 GB three or four times.

screen shot 2018-03-06 at 5 57 08 am

@shoaibahmedqureshi (Author)

For some reason I am unable to push the code, but the method is there in the previous project and I was able to reproduce the issue without it as well.


funkyboy commented Mar 6, 2018

@shoaibahmedqureshi If you are referring to this project, you can't push to master, but you can create a branch and a PR so I can see the diff.


funkyboy commented Mar 6, 2018

@shoaibahmedqureshi @hrubbani while inspecting another issue I found a bug in our logging code.
I pushed an update.

Please retry testing the test bed project by following the instructions in the README.

Feel free also to pull the branch into your original project and see how it goes.

@shoaibahmedqureshi (Author)

@funkyboy from initial testing, memory looks improved at log level .none; however, we are testing further and will share details if we hit any issues.


funkyboy commented Mar 8, 2018

@shoaibahmedqureshi thanks for the update. I invite you to also test the .error level, which is presumably the value you'll use in production.

@funkyboy (Contributor)

@shoaibahmedqureshi any news on this? We will make a new release soon with these changes.

@hrubbani

@funkyboy so far so good; the app looks much better after the latest changes, with no more hangs or freezes. We will let you know if we find anything unusual in our testing. For now, these changes can be released from our side.

@AndrewBarba

Any chance this gets merged today? We just got hit really hard by this: thousands of our users experienced crashes during a live event we just ran.

@mattheworiordan (Member)

> Any chance this gets merged today? We just got hit really hard by this: thousands of our users experienced crashes during a live event we just ran.

@AndrewBarba how do you know this issue is related to the problems you are having? Can you provide some more detail on what problem you are seeing?

@AndrewBarba

Because of the OOM crashes I saw in Fabric during our live event.

@AndrewBarba

@mattheworiordan @funkyboy @hrubbani

Here's memory usage for the current SDK:

screen shot 2018-03-15 at 3 54 31 pm

screen shot 2018-03-15 at 3 54 46 pm

@AndrewBarba

fix-memory-consumption branch:

screen shot 2018-03-15 at 4 00 25 pm

screen shot 2018-03-15 at 4 00 30 pm

@funkyboy (Contributor)

@shoaibahmedqureshi @AndrewBarba @hrubbani This is now fixed in 1.0.12.

@AndrewBarba

Thanks!
