Returning list of op names in get_graph_ops and extended error atoms to all TF error codes #6
Conversation
c_src/Tensorflex.c (outdated)

    break;
    case TF_DATA_LOSS: strcpy(error,"data_loss");
        break;
    default: strcpy(error,"unlisted_code");
Maybe we could have each of those call enif_make_atom, so we don't have to allocate the string in the first place, just to convert it to an atom?
Fixed this in the latest commit. I was needlessly allocating a string for this. Renamed the function to error_to_atom now.
c_src/Tensorflex.c (outdated)

    char op_name[BASE_STRING_LENGTH];
    for(int i=0; i<n_ops; i++) {
        op_temp = TF_GraphNextOperation(*graph, &pos);
        strcpy(op_name, (char*) TF_OperationName(op_temp));
Do we need to copy the string here, given that we are already copying it later to the binary with memcpy?
Again, this wasn't needed and I have fixed this in the latest commit. I apologize for the poorly written code here.
Comments added. Other than that, it looks good to go! 👍 What next steps do you have in mind? :)
Thanks for the feedback @josevalim! I have been thinking about the next steps, and the logical progression of adding functions should eventually get to the point where we can run the loaded graph against our own inputs and generate prediction outputs. For this to work, a lot of functions need to be added first: particularly ones that will allow us to read Tensors supplied by the user. I am thinking over how I would accomplish this, and then I will submit a PR for review. These TF_Tensor functions are very important, and they would have to encompass a wide variety of inputs. I might also keep importing graph-based functions from TensorFlow as and when required (like the operation list function).
@anshuman23 agreed! ❤️ Let's go with this plan and continue exploring the API and the "user story", but note that at some point we will have to "sit down" and write documentation and tests. Also, can you please send an e-mail to the beamcommunity mailing list with your progress so far and your plans for the next week? What is coming next can be a very quick digest, like the one that you just sent!
Thanks @josevalim, and I completely agree on taking time out to write documentation and tests. I will incorporate that into my plans as well. :) Yes, I'll send that email right away. Thanks again for all the help!
So this PR covers the changes you had last requested @josevalim. Other than that, I have just removed some unused code from before. I also ensured I am using enif_make_binary wherever needed instead of enif_make_string. :error_to_string handles that now, and an example of a faulty graph reading attempt would look like this:

The PR also adds get_graph_ops, which returns a list of strings (the op names in the graph). An example is as follows: