make ops structure flat #1265

Open
wants to merge 1 commit into base: main
16 changes: 14 additions & 2 deletions include/umf/memory_pool_ops.h
@@ -28,6 +28,9 @@ extern "C" {
/// pointers.
///
typedef struct umf_memory_pool_ops_t {
/// Size of this structure.
/// Should be initialized with sizeof(umf_memory_pool_ops_t) for the current UMF version.
size_t size;
Contributor:
Why isn't the version field enough?

Contributor Author:
Because the user does not know the mapping between size and version. Since this structure might be bigger than the user is aware of (it may be extended in future UMF versions), the size field is needed if the user wants to copy this structure. That is why this PR adds a flexible array member at the end of the struct.

Adding the size field was one of the conclusions of the linked issue.

Contributor:
Because the user does not know the mapping between size and version. Since this structure might be bigger than the user is aware of (it may be extended in future UMF versions), the size field is needed if the user wants to copy this structure. That is why this PR adds a flexible array member at the end of the struct.

Adding the size field was one of the conclusions of the linked issue.

If I remember correctly, we already discussed this issue, and if I am not wrong we did not find a use case where it is needed. Could you please post a use case here where this size field is actually used?

Contributor:
BTW, I am not against that, but I just do not understand why it is needed.

Contributor Author:
TBH I found this action item in the final notes after the discussion in the issue, so I implemented it.

The use case is when the user wants to add some extra functionality on top of our provider/pool.
To do so:

const umf_memory_pool_ops_t *ops = umfDisjointPoolOps();

umf_memory_pool_ops_t *my_ops = malloc(ops->size);
memcpy(my_ops, ops, ops->size);
my_ops->malloc = createMyCustomMallocWrapper(ops->malloc);

Of course this is also possible without the size field, but to do it in a backward-compatible way the user must reset the version field (which, let's be honest, no one will do, and after a UMF update we will be blamed for the compatibility break).
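
For contrast, a rough sketch of how the copy looks without a size field (the header paths here are assumptions, and createMyCustomMallocWrapper is the same hypothetical helper as above):

#include <string.h>
#include <umf/memory_pool_ops.h>
#include <umf/pools/pool_disjoint.h>

typedef void *(*pool_malloc_fn_t)(void *pool, size_t size);
pool_malloc_fn_t createMyCustomMallocWrapper(pool_malloc_fn_t original);

static umf_memory_pool_ops_t my_ops;

static void init_wrapped_ops(void) {
    /* Without a size field the caller can only copy the layout its own
     * build of UMF knows about... */
    memcpy(&my_ops, umfDisjointPoolOps(), sizeof(my_ops));
    /* ...and must pin the version by hand, which is easy to forget. */
    my_ops.version = UMF_POOL_OPS_VERSION_CURRENT;
    my_ops.malloc = createMyCustomMallocWrapper(my_ops.malloc);
}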

Contributor:
const umf_memory_pool_ops_t *ops = umfDisjointPoolOps();

umf_memory_pool_ops_t *my_ops = malloc(ops->size);
memcpy(my_ops, ops, ops->size);
my_ops->malloc = createMyCustomMallocWrapper(ops->malloc);

I think this is not the proper way to implement a wrapper on top of a memory provider. The proper way is to create a wrapper memory provider that uses a handle to another memory provider. This is how our tracking provider is implemented. Another example is the tracing provider in our tests.

We should keep in mind that the proper way to use a memory provider is to create a memory provider handle and use it via the Memory Provider API, e.g. umfMemoryProviderAlloc and umfMemoryProviderFree.

The ops structures are just an API that UMF provides to memory provider/pool developers (something like a plugin interface). The ops structures are intended to be used by UMF, not by anyone else. Via the ops structure we are telling memory provider/pool developers: "if you want UMF to use your custom provider, please give us its ops structure".
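
For illustration, a rough sketch of that wrapper pattern under the flattened ops layout (hypothetical names, not the actual tracking provider; only alloc and free forwarding is shown):

#include <stddef.h>
#include <umf/memory_provider.h>
#include <umf/memory_provider_ops.h>

/* Hypothetical wrapper state: it owns no memory itself and forwards every
 * call to another provider through its public handle. */
typedef struct tracing_provider_t {
    umf_memory_provider_handle_t inner;
} tracing_provider_t;

static umf_result_t tracing_alloc(void *provider, size_t size,
                                  size_t alignment, void **ptr) {
    tracing_provider_t *t = (tracing_provider_t *)provider;
    /* tracing/bookkeeping would go here */
    return umfMemoryProviderAlloc(t->inner, size, alignment, ptr);
}

static umf_result_t tracing_free(void *provider, void *ptr, size_t size) {
    tracing_provider_t *t = (tracing_provider_t *)provider;
    return umfMemoryProviderFree(t->inner, ptr, size);
}

The remaining mandatory ops forward in the same way, and the resulting ops structure is passed to umfMemoryProviderCreate like any other provider, so the user never needs to copy UMF's own ops structures.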

/// Version of the ops structure.
/// Should be initialized using UMF_POOL_OPS_VERSION_CURRENT.
uint32_t version;
@@ -126,6 +129,11 @@ typedef struct umf_memory_pool_ops_t {
///
umf_result_t (*get_last_allocation_error)(void *pool);

///
/// The following functions, prefixed with ext_, are optional and a memory pool
/// implementation may leave them NULL.
///

///
/// @brief Control operation for the memory pool.
/// The function is used to perform various control operations
@@ -139,8 +147,12 @@
///
/// @return umf_result_t result of the control operation.
///
umf_result_t (*ctl)(void *hPool, int operationType, const char *name,
void *arg, umf_ctl_query_type_t queryType);
umf_result_t (*ext_ctl)(void *hPool, int operationType, const char *name,
void *arg, umf_ctl_query_type_t queryType);

/// Reserved for future use
/// Note: When copying this structure, use its provided size field rather than using sizeof.
char reserved[];
} umf_memory_pool_ops_t;

#ifdef __cplusplus
258 changes: 133 additions & 125 deletions include/umf/memory_provider_ops.h

Large diffs are not rendered by default.

29 changes: 21 additions & 8 deletions src/memory_pool.c
@@ -30,7 +30,7 @@ static int CTL_SUBTREE_HANDLER(by_handle_pool)(void *ctx,
umf_ctl_query_type_t queryType) {
(void)indexes, (void)source;
umf_memory_pool_handle_t hPool = (umf_memory_pool_handle_t)ctx;
hPool->ops.ctl(hPool, /*unused*/ 0, extra_name, arg, queryType);
hPool->ops.ext_ctl(hPool, /*unused*/ 0, extra_name, arg, queryType);
return 0;
}

@@ -58,18 +58,31 @@ static umf_result_t umfPoolCreateInternal(const umf_memory_pool_ops_t *ops,
}

umf_result_t ret = UMF_RESULT_SUCCESS;
umf_memory_pool_handle_t pool =
umf_ba_global_alloc(sizeof(umf_memory_pool_t));
if (!pool) {
return UMF_RESULT_ERROR_OUT_OF_HOST_MEMORY;
}

if (ops->version != UMF_POOL_OPS_VERSION_CURRENT) {
LOG_WARN("Memory Pool ops version \"%d\" is different than the current "
"version \"%d\"",
ops->version, UMF_POOL_OPS_VERSION_CURRENT);
}

if (ops->size == 0) {
LOG_ERR("Memory Pool ops size is not set");
return UMF_RESULT_ERROR_INVALID_ARGUMENT;
}

if (ops->size > sizeof(umf_memory_pool_ops_t)) {
LOG_ERR("Memory Pool ops size \"%zu\" is greater than the current "
"size \"%zu\"",
ops->size, sizeof(umf_memory_pool_ops_t));
return UMF_RESULT_ERROR_INVALID_ARGUMENT;
}

umf_memory_pool_handle_t pool =
umf_ba_global_alloc(sizeof(umf_memory_pool_t));
if (!pool) {
return UMF_RESULT_ERROR_OUT_OF_HOST_MEMORY;
}

if (!(flags & UMF_POOL_CREATE_FLAG_DISABLE_TRACKING)) {
// Wrap provider with memory tracking provider.
ret = umfTrackingMemoryProviderCreate(provider, pool, &pool->provider);
@@ -84,8 +97,8 @@ static umf_result_t umfPoolCreateInternal(const umf_memory_pool_ops_t *ops,
pool->ops = *ops;
pool->tag = NULL;

if (NULL == pool->ops.ctl) {
pool->ops.ctl = umfDefaultCtlPoolHandle;
if (NULL == pool->ops.ext_ctl) {
pool->ops.ext_ctl = umfDefaultCtlPoolHandle;
}

if (NULL == utils_mutex_init(&pool->lock)) {
131 changes: 77 additions & 54 deletions src/memory_provider.c
@@ -27,8 +27,8 @@ static int CTL_SUBTREE_HANDLER(by_handle_provider)(
umf_ctl_query_type_t queryType) {
(void)indexes, (void)source;
umf_memory_provider_handle_t hProvider = (umf_memory_provider_handle_t)ctx;
hProvider->ops.ctl(hProvider->provider_priv, /*unused*/ 0, extra_name, arg,
queryType);
hProvider->ops.ext_ctl(hProvider->provider_priv, /*unused*/ 0, extra_name,
arg, queryType);
return 0;
}

@@ -120,67 +120,78 @@ static umf_result_t umfDefaultCtlHandle(void *provider, int operationType,
}

void assignOpsExtDefaults(umf_memory_provider_ops_t *ops) {
if (!ops->ext.purge_lazy) {
ops->ext.purge_lazy = umfDefaultPurgeLazy;
if (!ops->ext_purge_lazy) {
ops->ext_purge_lazy = umfDefaultPurgeLazy;
}
if (!ops->ext.purge_force) {
ops->ext.purge_force = umfDefaultPurgeForce;
if (!ops->ext_purge_force) {
ops->ext_purge_force = umfDefaultPurgeForce;
}
if (!ops->ext.allocation_split) {
ops->ext.allocation_split = umfDefaultAllocationSplit;
if (!ops->ext_allocation_split) {
ops->ext_allocation_split = umfDefaultAllocationSplit;
}
if (!ops->ext.allocation_merge) {
ops->ext.allocation_merge = umfDefaultAllocationMerge;
if (!ops->ext_allocation_merge) {
ops->ext_allocation_merge = umfDefaultAllocationMerge;
}
}

void assignOpsIpcDefaults(umf_memory_provider_ops_t *ops) {
if (!ops->ipc.get_ipc_handle_size) {
ops->ipc.get_ipc_handle_size = umfDefaultGetIPCHandleSize;
if (!ops->ext_get_ipc_handle_size) {
ops->ext_get_ipc_handle_size = umfDefaultGetIPCHandleSize;
}
if (!ops->ipc.get_ipc_handle) {
ops->ipc.get_ipc_handle = umfDefaultGetIPCHandle;
if (!ops->ext_get_ipc_handle) {
ops->ext_get_ipc_handle = umfDefaultGetIPCHandle;
}
if (!ops->ipc.put_ipc_handle) {
ops->ipc.put_ipc_handle = umfDefaultPutIPCHandle;
if (!ops->ext_put_ipc_handle) {
ops->ext_put_ipc_handle = umfDefaultPutIPCHandle;
}
if (!ops->ipc.open_ipc_handle) {
ops->ipc.open_ipc_handle = umfDefaultOpenIPCHandle;
if (!ops->ext_open_ipc_handle) {
ops->ext_open_ipc_handle = umfDefaultOpenIPCHandle;
}
if (!ops->ipc.close_ipc_handle) {
ops->ipc.close_ipc_handle = umfDefaultCloseIPCHandle;
if (!ops->ext_close_ipc_handle) {
ops->ext_close_ipc_handle = umfDefaultCloseIPCHandle;
}
if (!ops->ctl) {
ops->ctl = umfDefaultCtlHandle;
if (!ops->ext_ctl) {
ops->ext_ctl = umfDefaultCtlHandle;
}
}

static bool validateOpsMandatory(const umf_memory_provider_ops_t *ops) {
// Mandatory ops should be non-NULL
return ops->alloc && ops->free && ops->get_recommended_page_size &&
ops->get_min_page_size && ops->initialize && ops->finalize &&
ops->get_last_native_error && ops->get_name;
}
#define CHECK_OP(ops, fn) \
if (!(ops)->fn) { \
LOG_ERR("Error: missing function pointer: %s\n", #fn); \
return false; \
}

static bool validateOpsExt(const umf_memory_provider_ext_ops_t *ext) {
// split and merge functions should be both NULL or both non-NULL
return (ext->allocation_split && ext->allocation_merge) ||
(!ext->allocation_split && !ext->allocation_merge);
}
static bool validateOps(const umf_memory_provider_ops_t *ops) {
// Validate mandatory operations one by one
CHECK_OP(ops, alloc);
CHECK_OP(ops, free);
CHECK_OP(ops, get_recommended_page_size);
CHECK_OP(ops, get_min_page_size);
CHECK_OP(ops, initialize);
CHECK_OP(ops, finalize);
CHECK_OP(ops, get_last_native_error);
CHECK_OP(ops, get_name);

if ((ops->ext_allocation_split == NULL) !=
(ops->ext_allocation_merge == NULL)) {
LOG_ERR("Error: ext_allocation_split and ext_allocation_merge must be "
"both set or both NULL\n");
return false;
}

static bool validateOpsIpc(const umf_memory_provider_ipc_ops_t *ipc) {
// valid if all ops->ipc.* are non-NULL or all are NULL
return (ipc->get_ipc_handle_size && ipc->get_ipc_handle &&
ipc->put_ipc_handle && ipc->open_ipc_handle &&
ipc->close_ipc_handle) ||
(!ipc->get_ipc_handle_size && !ipc->get_ipc_handle &&
!ipc->put_ipc_handle && !ipc->open_ipc_handle &&
!ipc->close_ipc_handle);
}
bool ipcAllSet = ops->ext_get_ipc_handle_size && ops->ext_get_ipc_handle &&
ops->ext_put_ipc_handle && ops->ext_open_ipc_handle &&
ops->ext_close_ipc_handle;
bool ipcAllNull = !ops->ext_get_ipc_handle_size &&
!ops->ext_get_ipc_handle && !ops->ext_put_ipc_handle &&
!ops->ext_open_ipc_handle && !ops->ext_close_ipc_handle;
if (!ipcAllSet && !ipcAllNull) {
LOG_ERR("Error: IPC function pointers must be either all set or all "
"NULL\n");
return false;
}

static bool validateOps(const umf_memory_provider_ops_t *ops) {
return validateOpsMandatory(ops) && validateOpsExt(&(ops->ext)) &&
validateOpsIpc(&(ops->ipc));
return true;
}

umf_result_t umfMemoryProviderCreate(const umf_memory_provider_ops_t *ops,
@@ -197,6 +208,18 @@ umf_result_t umfMemoryProviderCreate(const umf_memory_provider_ops_t *ops,
ops->version, UMF_PROVIDER_OPS_VERSION_CURRENT);
}

if (ops->size == 0) {
LOG_ERR("Memory Provider ops size is zero");
return UMF_RESULT_ERROR_INVALID_ARGUMENT;
}

if (ops->size > sizeof(umf_memory_provider_ops_t)) {
LOG_ERR("Memory Provider ops size \"%zu\" is greater than the "
"current size \"%zu\"",
ops->size, sizeof(umf_memory_provider_ops_t));
return UMF_RESULT_ERROR_INVALID_ARGUMENT;
}

umf_memory_provider_handle_t provider =
umf_ba_global_alloc(sizeof(umf_memory_provider_t));
if (!provider) {
@@ -300,7 +323,7 @@ umf_result_t umfMemoryProviderPurgeLazy(umf_memory_provider_handle_t hProvider,
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((ptr != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
umf_result_t res =
hProvider->ops.ext.purge_lazy(hProvider->provider_priv, ptr, size);
hProvider->ops.ext_purge_lazy(hProvider->provider_priv, ptr, size);
checkErrorAndSetLastProvider(res, hProvider);
return res;
}
@@ -310,7 +333,7 @@ umf_result_t umfMemoryProviderPurgeForce(umf_memory_provider_handle_t hProvider,
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((ptr != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
umf_result_t res =
hProvider->ops.ext.purge_force(hProvider->provider_priv, ptr, size);
hProvider->ops.ext_purge_force(hProvider->provider_priv, ptr, size);
checkErrorAndSetLastProvider(res, hProvider);
return res;
}
@@ -329,7 +352,7 @@ umfMemoryProviderAllocationSplit(umf_memory_provider_handle_t hProvider,
UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((firstSize < totalSize), UMF_RESULT_ERROR_INVALID_ARGUMENT);

umf_result_t res = hProvider->ops.ext.allocation_split(
umf_result_t res = hProvider->ops.ext_allocation_split(
hProvider->provider_priv, ptr, totalSize, firstSize);
checkErrorAndSetLastProvider(res, hProvider);
return res;
@@ -347,7 +370,7 @@ umfMemoryProviderAllocationMerge(umf_memory_provider_handle_t hProvider,
UMF_CHECK(((uintptr_t)highPtr - (uintptr_t)lowPtr < totalSize),
UMF_RESULT_ERROR_INVALID_ARGUMENT);

umf_result_t res = hProvider->ops.ext.allocation_merge(
umf_result_t res = hProvider->ops.ext_allocation_merge(
hProvider->provider_priv, lowPtr, highPtr, totalSize);
checkErrorAndSetLastProvider(res, hProvider);
return res;
@@ -358,7 +381,7 @@ umfMemoryProviderGetIPCHandleSize(umf_memory_provider_handle_t hProvider,
size_t *size) {
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((size != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
return hProvider->ops.ipc.get_ipc_handle_size(hProvider->provider_priv,
return hProvider->ops.ext_get_ipc_handle_size(hProvider->provider_priv,
size);
}

@@ -369,7 +392,7 @@ umfMemoryProviderGetIPCHandle(umf_memory_provider_handle_t hProvider,
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((ptr != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((providerIpcData != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
return hProvider->ops.ipc.get_ipc_handle(hProvider->provider_priv, ptr,
return hProvider->ops.ext_get_ipc_handle(hProvider->provider_priv, ptr,
size, providerIpcData);
}

@@ -378,7 +401,7 @@ umfMemoryProviderPutIPCHandle(umf_memory_provider_handle_t hProvider,
void *providerIpcData) {
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((providerIpcData != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
return hProvider->ops.ipc.put_ipc_handle(hProvider->provider_priv,
return hProvider->ops.ext_put_ipc_handle(hProvider->provider_priv,
providerIpcData);
}

@@ -388,7 +411,7 @@ umfMemoryProviderOpenIPCHandle(umf_memory_provider_handle_t hProvider,
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((providerIpcData != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((ptr != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
return hProvider->ops.ipc.open_ipc_handle(hProvider->provider_priv,
return hProvider->ops.ext_open_ipc_handle(hProvider->provider_priv,
providerIpcData, ptr);
}

@@ -397,6 +420,6 @@ umfMemoryProviderCloseIPCHandle(umf_memory_provider_handle_t hProvider,
void *ptr, size_t size) {
UMF_CHECK((hProvider != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
UMF_CHECK((ptr != NULL), UMF_RESULT_ERROR_INVALID_ARGUMENT);
return hProvider->ops.ipc.close_ipc_handle(hProvider->provider_priv, ptr,
return hProvider->ops.ext_close_ipc_handle(hProvider->provider_priv, ptr,
size);
}
1 change: 1 addition & 0 deletions src/pool/pool_disjoint.c
@@ -929,6 +929,7 @@ void disjoint_pool_finalize(void *pool) {
}

static umf_memory_pool_ops_t UMF_DISJOINT_POOL_OPS = {
.size = sizeof(umf_memory_pool_ops_t),
.version = UMF_VERSION_CURRENT,
.initialize = disjoint_pool_initialize,
.finalize = disjoint_pool_finalize,
1 change: 1 addition & 0 deletions src/pool/pool_jemalloc.c
@@ -527,6 +527,7 @@ static umf_result_t op_get_last_allocation_error(void *pool) {
}

static umf_memory_pool_ops_t UMF_JEMALLOC_POOL_OPS = {
.size = sizeof(umf_memory_pool_ops_t),
.version = UMF_POOL_OPS_VERSION_CURRENT,
.initialize = op_initialize,
.finalize = op_finalize,
1 change: 1 addition & 0 deletions src/pool/pool_proxy.c
@@ -123,6 +123,7 @@ static umf_result_t proxy_get_last_allocation_error(void *pool) {
}

static umf_memory_pool_ops_t UMF_PROXY_POOL_OPS = {
.size = sizeof(umf_memory_pool_ops_t),
.version = UMF_POOL_OPS_VERSION_CURRENT,
.initialize = proxy_pool_initialize,
.finalize = proxy_pool_finalize,
3 changes: 2 additions & 1 deletion src/pool/pool_scalable.c
@@ -444,6 +444,7 @@ static umf_result_t pool_ctl(void *hPool, int operationType, const char *name,
}

static umf_memory_pool_ops_t UMF_SCALABLE_POOL_OPS = {
.size = sizeof(umf_memory_pool_ops_t),
.version = UMF_POOL_OPS_VERSION_CURRENT,
.initialize = tbb_pool_initialize,
.finalize = tbb_pool_finalize,
@@ -454,7 +455,7 @@ static umf_memory_pool_ops_t UMF_SCALABLE_POOL_OPS = {
.malloc_usable_size = tbb_malloc_usable_size,
.free = tbb_free,
.get_last_allocation_error = tbb_get_last_allocation_error,
.ctl = pool_ctl};
.ext_ctl = pool_ctl};

const umf_memory_pool_ops_t *umfScalablePoolOps(void) {
return &UMF_SCALABLE_POOL_OPS;