[feat]: Right Click on Table - INSERT #539
@mhmdkrmabd one thing. You know you added the uuid() function option for uuid data types on the insert. Make sure to look at the other data type functions (https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/functions.html) — there's a few in there like uuid(), now(), current_timestamp(), current_date(), etc.
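For reference, a quick sketch of how those functions could look in a generated INSERT (the keyspace, table and column names here are made up for illustration):

```sql
-- Hypothetical table; shows the built-in generator functions per type
INSERT INTO my_keyspace.events (id, event_time, created_at, created_on)
VALUES (
  uuid(),                -- uuid: random type-4 UUID
  now(),                 -- timeuuid: type-1 UUID from the current time
  current_timestamp(),   -- timestamp: current time
  current_date()         -- date: current day
);
```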
mhmdkrmabd added a commit that referenced this issue on Feb 17, 2025:
- Ticket #539: Table data INSERTION process
- The workbench now supports the insertion process for all standard tables via the UI through right-click actions.
- A tree view structure has been implemented for listing table structures. This structure accommodates all table complexities, including various depths and nesting levels, ensuring accurate representation.
- Real-time validation is provided for all supported Cassandra data types, such as integer, date, UUID, etc.
- For each data type, appropriate functions and tools are available for seamless insertion. For example, date types include user-friendly date/time pickers, while UUID types are automatically populated with relevant functions like `uuid()`.
- For the `blob` type, a file up to the maximum size set in the workbench's config file can be uploaded, and the workbench will convert it to a hexadecimal string. Blob content can also be previewed easily, if it's safe to do so; for now, images and documents are fully supported.
- This feature can be disabled via the option `previewBlob` under the `features` section; the maximum allowed size for a file to be uploaded and converted to proper blob content can be changed via the option `insertBlobSize` under the `limit` section.
- NULL values can be easily set for fields, alongside the ability to ignore non-mandatory fields during insertion.
- Data requiring conversion via built-in Cassandra functions is handled automatically. For instance, when inserting a timestamp into a date type, the workbench will automatically include the toDate() function in the generated insertion statement.
- Support has been added for setting the TTL (Time to Live), the data creation timestamp, and the write consistency level during data insertion.
- JSON data format insertion is now supported, including the ability to set default values for omitted columns in JSON format.
- Minor changes and updates:
  - New phrases have been added to the language files.
  - New packages have been added, and others have been removed.
  - Renamed a few labels in the UI.
  - Minor other changes and updates.
digiserg pushed a commit that referenced this issue on Feb 17, 2025:
* Mainly for ticket #539, a bunch of changes, updates and improvements (same commit message as the one above)
* Updated the maximum allowed size for blob in the insertion process
We need a wizard approach for generating INSERT CQL statements.

Do not display this on counter tables, as they should use a different right-click action (see #700). Also see #702, as it's related too.

The goal here is to have a wizard that essentially generates CQL with valid syntax, with placeholders in the generated text for the user to update before executing, as sketched below.

This will be quite complex, particularly when handling collection and UDT types.

See: https://cassandra.apache.org/doc/latest/cassandra/developing/cql/dml.html#insert-statement
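As a rough sketch of what the wizard could emit (the `<placeholder>` tokens, keyspace and table names are illustrative only, not a final syntax):

```sql
-- Generated INSERT; replace the <placeholders> before executing
INSERT INTO my_keyspace.users (
  id,          -- uuid
  name,        -- text
  created_at   -- timestamp
)
VALUES (
  uuid(),               -- or a literal uuid
  '<name: text>',       -- text values must be single-quoted
  current_timestamp()   -- or a literal timestamp, e.g. '2024-11-03T10:15:30Z'
);
```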
Requirements

- Get the `'` (quoting) correct based on the types.
- Default write consistency level: LOCAL_ONE.
Examples
Below are CQL INSERT examples covering the various Cassandra data types. When generating the CQL with the placeholders for users to enter the values in the text editor, make sure to get the `'` (quoting) correct based on the types, and also add comments showing the data types.

Basic Native Types
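A sketch of what could be generated here, assuming a hypothetical table with common native types (note which literals are quoted and which are not):

```sql
INSERT INTO my_keyspace.native_types (
  id,         -- uuid
  name,       -- text: single-quoted
  age,        -- int: unquoted
  balance,    -- decimal: unquoted
  active,     -- boolean: true/false, unquoted
  joined,     -- timestamp: quoted ISO 8601
  birthday,   -- date: quoted 'yyyy-mm-dd'
  raw         -- blob: 0x-prefixed hex, unquoted
)
VALUES (
  uuid(),
  'Alice',
  30,
  1234.56,
  true,
  '2024-11-03T10:15:30Z',
  '1994-02-17',
  0xCAFEBABE
);
```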
Collection Types
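Assuming a hypothetical table with `list`, `set` and `map` columns, the collection literals would look like:

```sql
INSERT INTO my_keyspace.collection_types (id, tags, emails, scores)
VALUES (
  uuid(),
  ['red', 'green'],                      -- list<text>: square brackets, ordered
  {'a@example.com', 'b@example.com'},    -- set<text>: curly braces, unique values
  {'math': 90, 'art': 75}                -- map<text, int>: curly braces, key: value
);
```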
User-Defined Types
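A sketch assuming a hypothetical `address` UDT; note that UDT field names are not quoted inside the literal:

```sql
-- assumes: CREATE TYPE my_keyspace.address (street text, city text, zip int);
INSERT INTO my_keyspace.users_with_udt (id, home_address)
VALUES (
  uuid(),
  { street: '1 Main St', city: 'Springfield', zip: 12345 }  -- UDT literal
);
```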
Complex Nested Collections
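Nested collections must be frozen; a sketch with a hypothetical `map<text, frozen<list<int>>>` column:

```sql
INSERT INTO my_keyspace.nested_types (id, readings)
VALUES (
  uuid(),
  { 'sensor-1': [1, 2, 3], 'sensor-2': [4, 5] }  -- map<text, frozen<list<int>>>
);
```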
TTL examples
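For example (hypothetical table; the TTL is in seconds):

```sql
-- the row expires 24 hours (86400 seconds) after the write
INSERT INTO my_keyspace.sessions (id, token)
VALUES (uuid(), 'abc123')
USING TTL 86400;
```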
Using TIMESTAMP example
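For example (hypothetical table; the value is microseconds since epoch):

```sql
-- set an explicit write timestamp instead of the coordinator's clock
INSERT INTO my_keyspace.events (id, payload)
VALUES (uuid(), 'hello')
USING TIMESTAMP 1739750400000000;
```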
Write Consistency Levels
ALL - A write must be written to the commit log and memtable on all replica nodes in the cluster for that partition. This provides the highest consistency and the lowest availability.
EACH_QUORUM - A write must be written to the commit log and memtable on a quorum of replica nodes in each datacenter. This level ensures strong consistency across datacenters.
QUORUM - A write must be written to the commit log and memtable on a quorum of replica nodes. This level balances consistency and availability, ensuring strong consistency if some level of node failure is acceptable.
LOCAL_QUORUM - A write must be written to the commit log and memtable on a quorum of replica nodes within the local datacenter. This reduces consistency for higher availability within a single datacenter.
ONE - A write must be written to the commit log and memtable of at least one replica node. This is the default level and provides high availability at the cost of consistency.
TWO - A write must be written to the commit log and memtable of at least two replica nodes. This level provides slightly higher consistency than ONE.
THREE - A write must be written to the commit log and memtable of at least three replica nodes. This level provides a bit more consistency than TWO.
LOCAL_ONE - A write must be written to the commit log and memtable of at least one replica node within the local datacenter. This provides high availability within the local datacenter at the cost of consistency.
ANY - A write can be considered successful with acknowledgment from any replica node. This level provides the highest availability, sacrificing consistency. It's useful in situations where data loss is acceptable if no replicas are available.
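Note the consistency level is not part of the INSERT statement itself; it is set on the request, e.g. with the CONSISTENCY command in cqlsh (or per statement in a driver):

```sql
CONSISTENCY LOCAL_QUORUM;
INSERT INTO my_keyspace.users (id, name) VALUES (uuid(), 'Alice');
```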
Insert Using JSON
https://cassandra.apache.org/doc/stable/cassandra/cql/json.html
You can do an insert into Cassandra using JSON to map the values for columns as key/value pairs, e.g.:

```sql
-- case-sensitive column names need escaped double quotes inside the JSON string
INSERT INTO mytable JSON '{"\"myColumn\"": "value", "regular_column": 123}';
```

Where it gets complex is around the JSON encoding for the different data types.
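For instance, a sketch against the hypothetical native-types table above: uuids, timestamps and blobs are all encoded as JSON strings, while numbers and booleans stay bare:

```sql
INSERT INTO my_keyspace.native_types JSON '{
  "id": "89b7aa7a-8776-460b-8e1a-60cb4bcd523c",
  "name": "Alice",
  "age": 30,
  "active": true,
  "joined": "2024-11-03 10:15:30.000Z",
  "raw": "0xcafebabe"
}';
```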
Also, the handling of UDTs needs to be looked at carefully, e.g.:
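A sketch assuming the hypothetical `address` UDT from above; the UDT value is encoded as a nested JSON object with quoted field names:

```sql
INSERT INTO my_keyspace.users_with_udt JSON '{
  "id": "89b7aa7a-8776-460b-8e1a-60cb4bcd523c",
  "home_address": {"street": "1 Main St", "city": "Springfield", "zip": 12345}
}';
```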