Update comment and clear out few things
bensonarafat committed Aug 10, 2024
1 parent fa406f2 commit df0e188
Showing 6 changed files with 145 additions and 27 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -17,7 +17,7 @@ Google's ML Kit for Flutter is a set of [Flutter plugins](https://flutter.io/pla
| [Digital Ink Recognition](https://developers.google.com/ml-kit/vision/digital-ink-recognition) | [google\_mlkit\_digital\_ink\_recognition](https://pub.dev/packages/google_mlkit_digital_ink_recognition) [![Pub Version](https://img.shields.io/pub/v/google_mlkit_digital_ink_recognition)](https://pub.dev/packages/google_mlkit_digital_ink_recognition) | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/google_mlkit_digital_ink_recognition) |||
| [Pose Detection](https://developers.google.com/ml-kit/vision/pose-detection) | [google\_mlkit\_pose\_detection](https://pub.dev/packages/google_mlkit_pose_detection) [![Pub Version](https://img.shields.io/pub/v/google_mlkit_pose_detection)](https://pub.dev/packages/google_mlkit_pose_detection) | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/google_mlkit_pose_detection) |||
| [Selfie Segmentation](https://developers.google.com/ml-kit/vision/selfie-segmentation) | [google\_mlkit\_selfie\_segmentation](https://pub.dev/packages/google_mlkit_selfie_segmentation) [![Pub Version](https://img.shields.io/pub/v/google_mlkit_selfie_segmentation)](https://pub.dev/packages/google_mlkit_selfie_segmentation) | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/google_mlkit_selfie_segmentation) |||
| [Subject Segmentation](https://developers.google.com/ml-kit/vision/subject-segmentation) | [google\_mlkit\_subject\_segemtation](https://pub.dev/packages/google_mlkit_subject_segemtation) [![Pub Version](https://img.shields.io/pub/v/google_mlkit_subject_segemtation)](https://pub.dev/packages/google_mlkit_subject_segemtation) | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/google_mlkit_subject_segemtation) | ||
| [Subject Segmentation](https://developers.google.com/ml-kit/vision/subject-segmentation) | [google\_mlkit\_subject\_segmentation](https://pub.dev/packages/google_mlkit_subject_segmentation) [![Pub Version](https://img.shields.io/pub/v/google_mlkit_subject_segmentation)](https://pub.dev/packages/google_mlkit_subject_segmentation) | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/google_mlkit_subject_segmentation) | ||
| [Document Scanner](https://developers.google.com/ml-kit/vision/doc-scanner) | [google\_mlkit\_document\_scanner](https://pub.dev/packages/google_mlkit_document_scanner) [![Pub Version](https://img.shields.io/pub/v/google_mlkit_document_scanner)](https://pub.dev/packages/google_mlkit_document_scanner) | [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/google_mlkit_document_scanner) |||

### Natural Language APIs
2 changes: 1 addition & 1 deletion packages/google_mlkit_subject_segmentation/CHANGELOG.md
@@ -1,3 +1,3 @@
## 0.0.1

* TODO: Describe initial release.
* Initial release.
22 changes: 21 additions & 1 deletion packages/google_mlkit_subject_segmentation/LICENSE
@@ -1 +1,21 @@
TODO: Add your license here.
MIT License

Copyright (c) 2022 Francisco Bernal, Bharat Biradar and Benson Arafat.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
101 changes: 91 additions & 10 deletions packages/google_mlkit_subject_segmentation/README.md
@@ -1,15 +1,96 @@
# google_mlkit_subjection_segmentation
# Google's ML Kit Subject Segmentation for Flutter

A new Flutter plugin project.
[![Pub Version](https://img.shields.io/pub/v/google_mlkit_subject_segmentation)](https://pub.dev/packages/google_mlkit_subject_segmentation)
[![analysis](https://github.com/flutter-ml/google_ml_kit_flutter/actions/workflows/code-analysis.yml/badge.svg)](https://github.com/flutter-ml/google_ml_kit_flutter/actions)
[![Star on Github](https://img.shields.io/github/stars/flutter-ml/google_ml_kit_flutter.svg?style=flat&logo=github&colorB=deeppink&label=stars)](https://github.com/flutter-ml/google_ml_kit_flutter)
[![License: MIT](https://img.shields.io/badge/license-MIT-purple.svg)](https://opensource.org/licenses/MIT)

## Getting Started
> ***NOTE: This feature is still in Beta, and it is only available for Android. Stay tuned for updates on [Google's website](https://developers.google.com/ml-kit/vision/subject-segmentation) and request the feature [here](https://github.com/googlesamples/mlkit/issues).***
This project is a starting point for a Flutter
[plug-in package](https://flutter.dev/developing-packages/),
a specialized package that includes platform-specific implementation code for
Android and/or iOS.
A Flutter plugin to use [Google's ML Kit Subject Segmentation](https://developers.google.com/ml-kit/vision/subject-segmentation) to easily separate multiple subjects from the background in a picture, enabling use cases such as sticker creation, background swap, or adding cool effects to subjects.

For help getting started with Flutter development, view the
[online documentation](https://flutter.dev/docs), which offers tutorials,
samples, guidance on mobile development, and a full API reference.
Subjects are defined as the most prominent people, pets, or objects in the foreground of the image. If two subjects are very close to or touching each other, they are considered a single subject.

Each pixel of the mask is assigned a float value between 0.0 and 1.0. The closer the value is to 1.0, the higher the confidence that the pixel represents a subject, and vice versa.

On average, the latency measured on a Pixel 7 Pro is around 200 ms. This API currently only supports static images.

**Key capabilities:**

- Multi-subject segmentation: provides masks and bitmaps for each individual subject, rather than a single mask and bitmap for all subjects combined.
- Subject recognition: subjects recognized are objects, pets, and humans.
- On-device processing: all processing is performed on the device, preserving user privacy and requiring no network connectivity.


**PLEASE READ THIS** before continuing or posting a [new issue](https://github.com/flutter-ml/google_ml_kit_flutter/issues):

- [Google's ML Kit](https://developers.google.com/ml-kit) was built only for mobile platforms: iOS and Android apps. Web and other platforms are not supported; you can request support for those platforms from Google in [their repo](https://github.com/googlesamples/mlkit/issues).

- This plugin is not sponsored or maintained by Google. The [authors](https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/AUTHORS) are developers excited about Machine Learning who wanted to expose Google's native APIs to Flutter.

- Google's ML Kit APIs are only developed natively for iOS and Android. This plugin uses Flutter Platform Channels as explained [here](https://docs.flutter.dev/development/platform-integration/platform-channels).

Messages are passed between the client (the app/plugin) and host (platform) using platform channels as illustrated in this diagram:

<p align="center" width="100%">
<img src="https://docs.flutter.dev/assets/images/docs/PlatformChannels.png">
</p>

Messages and responses are passed asynchronously, to ensure the user interface remains responsive. To read more about platform channels go [here](https://docs.flutter.dev/development/platform-integration/platform-channels).

Because this plugin uses platform channels, no Machine Learning processing is done in Flutter/Dart; all the calls are passed to the native platform using `MethodChannel` on Android and `FlutterMethodChannel` on iOS, and executed using Google's native APIs. Think of this plugin as a bridge between your app and Google's native ML Kit APIs. This plugin only passes the call to the native API and the processing is done by Google's API. It is important that you understand this concept when it comes to debugging errors for your ML model and/or app. A simplified sketch of this call flow is shown below, after these notes.

- Since the plugin uses platform channels, you may encounter issues with the native API. Before submitting a new issue, identify the source of the issue. You can run both iOS and/or Android native [example apps by Google](https://github.com/googlesamples/mlkit) and make sure that the issue is not reproducible with their native examples. If you can reproduce the issue in their apps then report the issue to Google. The [authors](https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/AUTHORS) do not have access to the source code of their native APIs, so you need to report the issue to them. If you find that their example apps are okay and still you have an issue using this plugin, then look at our [closed and open issues](https://github.com/flutter-ml/google_ml_kit_flutter/issues). If you cannot find anything that can help you then report the issue and provide enough details. Be patient, someone from the community will eventually help you.
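
For illustration, the sketch below shows roughly how a call travels over the platform channel. It is a simplified view of the plugin source included later in this commit, not something you need to write yourself; `imageData` stands in for whatever `InputImage.toJson()` produces.

```dart
import 'package:flutter/services.dart';

// The plugin talks to the native ML Kit SDK over a named platform channel.
const MethodChannel _channel =
    MethodChannel('google_mlkit_subject_segmentation');

// Simplified call flow: the Dart side only forwards the request; all ML
// processing happens in Google's native Subject Segmentation API.
Future<dynamic> startSubjectSegmenter(String id, Map<String, dynamic> imageData) {
  return _channel.invokeMethod('vision#startSubjectSegmenter', <String, dynamic>{
    'id': id,
    'imageData': imageData,
  });
}
```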

## Requirements

### iOS

This feature is still in Beta, and it is only available for Android. Stay tuned for updates on [Google's website](https://developers.google.com/ml-kit/vision/subject-segmentation) and request the feature [here](https://github.com/googlesamples/mlkit/issues).

### Android

- minSdkVersion: 24
- targetSdkVersion: 33
- compileSdkVersion: 34

## Usage

### Subject Segmentation

#### Create an instance of `InputImage`

Create an instance of `InputImage` as explained [here](https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/packages/google_mlkit_commons#creating-an-inputimage).

```dart
final InputImage inputImage;
```
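
For example, as a minimal sketch (assuming the image is already saved on disk and `imagePath` is a placeholder for your own file location), the `InputImage` can be created from a file path:

```dart
final inputImage = InputImage.fromFilePath(imagePath);
```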

#### Create an instance of `SubjectSegmenter`

```dart
final segmenter = SubjectSegmenter();
```

#### Process image

```dart
final mask = await segmenter.processImage(inputImage);
```
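
The returned `SubjectSegmenterMask` holds one `Subject` per detected subject. As a minimal sketch (the 0.5 threshold is an arbitrary choice for this example, not part of the API), each subject's confidence values can be turned into a simple binary mask:

```dart
for (final subject in mask.subjects) {
  // Each subject has its own bounding box within the input image.
  print('Subject at (${subject.startX}, ${subject.startY}), '
      'size ${subject.subjectWidth}x${subject.subjectHeight}');

  // Treat pixels with a confidence above 0.5 as belonging to the subject.
  final binaryMask =
      subject.confidences.map((confidence) => confidence > 0.5).toList();
  print('Foreground pixels: ${binaryMask.where((isSubject) => isSubject).length}');
}
```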

#### Release resources with `close()`

```dart
segmenter.close();
```
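
A typical pattern (a minimal sketch; the surrounding `StatefulWidget` and the `segmenter` field are assumed here, not part of this plugin) is to call `close()` when the widget that owns the segmenter is disposed:

```dart
@override
void dispose() {
  segmenter.close();
  super.dispose();
}
```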

## Example app

Find the example app [here](https://github.com/flutter-ml/google_ml_kit_flutter/tree/master/packages/example).

## Contributing

Contributions are welcome.
In case of any problems, look at [existing issues](https://github.com/flutter-ml/google_ml_kit_flutter/issues); if you cannot find anything related to your problem, then open an issue.
Create an issue before opening a [pull request](https://github.com/flutter-ml/google_ml_kit_flutter/pulls) for non-trivial fixes.
In case of trivial fixes, open a [pull request](https://github.com/flutter-ml/google_ml_kit_flutter/pulls) directly.
@@ -41,5 +41,4 @@ android {

dependencies {
implementation 'com.google.android.gms:play-services-mlkit-subject-segmentation:16.0.0-beta1'
//implementation files('/Users/bensonarafat/development/flutter/bin/cache/artifacts/engine/android-arm/flutter.jar')
}
@@ -3,37 +3,51 @@ import 'package:google_mlkit_commons/google_mlkit_commons.dart';

/// A detector that performs segmentation on a given [InputImage].
class SubjectSegmenter {
/// A platform channel used to communicate with native code for segmentation
static const MethodChannel _channel =
MethodChannel('google_mlkit_subject_segmentation');

/// Instance id.
/// A unique identifier for the segmentation session, generated using the current timestamp.
final id = DateTime.now().microsecondsSinceEpoch.toString();

/// Processes the given [InputImage] for segmentation.
///
/// Sends the [InputImage] data to the native platform via the method channel
/// and returns the segmentation mask detected in the given image.
Future<SubjectSegmenterMask> processImage(InputImage inputImage) async {
final results = await _channel
.invokeMethod('vision#startSubjectSegmenter', <String, dynamic>{
'id': id,
'imageData': inputImage.toJson(),
});
// Convert the JSON response from the platform into a SubjectSegmenterMask instance.
SubjectSegmenterMask masks = SubjectSegmenterMask.fromJson(results);
return masks;
}

/// Closes the detector and releases its resources.
/// Closes the detector and releases the resources associated with it.
///
/// This should be called when the detector is no longer needed to free up
/// system resources on the native side.
Future<void> close() =>
_channel.invokeMethod('vision#closeSubjectSegmenter', {'id': id});
}

/// A data class that represents the segmentation mask returned by the [SubjectSegmenter].
class SubjectSegmenterMask {
/// The width of the segmentation mask
final int width;

/// The height of the segmentation mask
final int height;

/// A list of subjects detected in the image, each represented by a [Subject] instance.
final List<Subject> subjects;

/// Constructir to create a instance of [SubjectSegmenterMask].
/// Constructor to create an instance of [SubjectSegmenterMask].
///
/// The [width] and [height] represent the dimensions of the mask,
/// and [subjects] is a list of detected subjects.
SubjectSegmenterMask({
required this.width,
required this.height,
@@ -52,11 +66,21 @@
}
}

/// A data class that represents a detected subject within the segmentation mask.
class Subject {
/// The starting x-coordinate of this subject in the input image.
final int startX;

/// The starting y-coordinate of this subject in the input image.
final int startY;

/// The width of this subject in the input image.
final int subjectWidth;

/// The height of this subject in the input image.
final int subjectHeight;

/// A list of confidence values for the detected subject.
final List<double> confidences;

Subject(
@@ -66,6 +90,10 @@
required this.subjectHeight,
required this.confidences});

/// Creates an instance of [Subject] from a JSON object.
///
/// This factory constructor is used to convert JSON data into a [Subject] object.
factory Subject.fromJson(Map<dynamic, dynamic> json) {
return Subject(
startX: json['startX'] as int,
@@ -74,14 +102,4 @@
subjectHeight: json['height'] as int,
confidences: json['confidences']);
}

Map<dynamic, dynamic> toJson() {
return {
"startX": startX,
"startY": startY,
"subjectWidth": subjectWidth,
"subjectHeight": subjectHeight,
"confidences": confidences,
};
}
}
