
Is it possible to compute the full inference at the same time as the activations? #97

Description

@phitoduck

Hi there,

We're attempting to use tf-keras-vis "in production". We want to create a REST API that takes in an image and returns two things at once:

  1. a float (a "score"): the model's output for the image
  2. an image (a "heatmap"): a visualization of the activations used to produce (1)

Currently, we have to run inference twice: once to compute (1) and once to compute (2). Inference is slow, so we'd like to cut our latency roughly in half by computing (1) and (2) in a single pass through the model.
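For concreteness, this is roughly the shape of what we do today (a sketch only: the model path, input shape, and the choice of Gradcam as the visualizer are placeholders for illustration, not our exact setup):

```python
import numpy as np
from tensorflow import keras
from tf_keras_vis.gradcam import Gradcam

model = keras.models.load_model("model.h5")           # placeholder for our real model
image_batch = np.zeros((1, 224, 224, 3), "float32")   # placeholder for the request image

# Pass 1: plain inference to get the "score".
probs = model.predict(image_batch)
class_idx = int(np.argmax(probs[0]))

# Pass 2: a CAM pass to get the "heatmap" -- this runs the model all over again.
gradcam = Gradcam(model)
heatmap = gradcam(lambda output: output[:, class_idx], image_batch)
```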

Is this possible?

I was considering forking this repo. I saw that your Scorecam class uses the method described in this SO answer to create a new keras.Model consisting of all but the last layer of the model passed to Scorecam.

Would it be possible to use a similar method to create yet another keras.Model that contains only the last layer of our model? That way (see the sketch after this list), we could

  1. compute the activations (the "heatmap") by passing an image through the model with all but the last layer
  2. compute the "full inference" (the "score") by passing those activations through the model with only the last layer
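Something like the following is what I have in mind (a minimal sketch assuming a model whose last layer is a single Dense head; the tiny stand-in architecture and the `body`/`head` names are mine, not anything from tf-keras-vis):

```python
import numpy as np
from tensorflow import keras

# Stand-in for our real classifier (hypothetical architecture, for illustration only).
model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),        # penultimate layer -> "activations"
    keras.layers.Dense(1, activation="sigmoid"),  # last layer -> "score"
])

# "Body": everything up to and including the penultimate layer.
body = keras.Model(inputs=model.inputs, outputs=model.layers[-2].output)

# "Head": only the last layer, fed directly with the body's activations.
head_input = keras.Input(shape=tuple(model.layers[-2].output.shape[1:]))
head = keras.Model(inputs=head_input, outputs=model.layers[-1](head_input))

# One expensive pass through the body, one cheap pass through the head.
image_batch = np.zeros((1, 224, 224, 3), dtype="float32")
activations = body(image_batch)   # -> used for the heatmap
score = head(activations)         # -> the final prediction
```

With a split like this, the heavy trunk of the network would only run once per request; the heatmap post-processing would work on `activations` (in practice probably the output of the last convolutional layer rather than literally the penultimate layer), and the head call is comparatively cheap.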
