ClickHouse release 1.1.54289
Sept 20, 2017
New features:
- SYSTEM queries for server administration: SYSTEM RELOAD DICTIONARY, SYSTEM RELOAD DICTIONARIES, SYSTEM DROP DNS CACHE.
- Added functions for working with arrays: concat, arraySlice, arrayPushBack, arrayPushFront, arrayPopBack, arrayPopFront.
- Added the identity parameter for the ZooKeeper configuration. This allows you to isolate individual users on the same ZooKeeper cluster.
- Added the aggregate functions groupBitAnd, groupBitOr, and groupBitXor (for compatibility, they can also be accessed with the names BIT_AND, BIT_OR, and BIT_XOR).
- External dictionaries can be loaded from MySQL by specifying a socket in the filesystem.
- External dictionaries can be loaded from MySQL over SSL (the ssl_cert, ssl_key, and ssl_ca parameters).
- Added the max_network_bandwidth_for_user setting to restrict the overall bandwidth use for queries per user.
- Support for DROP TABLE for temporary tables.
- Support for reading DateTime values in Unix timestamp format from the CSV and JSONEachRow formats.
- Lagging replicas in distributed queries are now excluded by default (the default threshold is 5 minutes).
- FIFO locking is used during ALTER: an ALTER query isn't blocked indefinitely for continuously running queries.
- Option to set umask in the config file.
- Improved performance for queries with DISTINCT.
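As an illustration of the new array functions, here is a sketch of a single SELECT exercising each of them; the result comments assume the documented semantics (arraySlice takes a 1-based offset and a length):

```sql
SELECT
    concat([1, 2], [3, 4])         AS joined,        -- [1, 2, 3, 4]
    arraySlice([1, 2, 3, 4], 2, 2) AS middle,        -- [2, 3]
    arrayPushBack([1, 2], 3)       AS appended,      -- [1, 2, 3]
    arrayPushFront([2, 3], 1)      AS prepended,     -- [1, 2, 3]
    arrayPopBack([1, 2, 3])        AS without_last,  -- [1, 2]
    arrayPopFront([1, 2, 3])       AS without_first  -- [2, 3]
```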
Bug fixes:
- Improved the process for deleting old nodes in ZooKeeper. Previously, old nodes sometimes didn't get deleted if there were very frequent inserts, which caused the server to be slow to shut down, among other things.
- Fixed randomization when choosing hosts for the connection to ZooKeeper.
- Fixed the exclusion of lagging replicas in distributed queries if the replica is localhost.
- Fixed an error where a data part in a ReplicatedMergeTree table could be broken after running ALTER MODIFY on an element in a Nested structure.
- Fixed an error that could cause SELECT queries to "hang".
- Improvements to distributed DDL queries.
- Fixed the query CREATE TABLE ... AS <materialized view>.
- Resolved the deadlock in the ALTER ... CLEAR COLUMN IN PARTITION query for ReplicatedMergeTree tables.
- Fixed the invalid default value for Enums (0 instead of the minimum) when using the input_format_null_as_default option.
- Resolved the appearance of zombie processes when using a dictionary with an executable source.
- Fixed segfault for the HEAD query.
Improvements to development workflow and ClickHouse build:
- You can use pbuilder to build ClickHouse.
- You can use libstdc++ for builds on Linux.
- Added instructions for using static code analysis tools: Coverity, clang-tidy, and cppcheck.
Please note when upgrading:
There is now a higher default value for the MergeTree setting max_bytes_to_merge_at_max_space_in_pool (the maximum total size of data parts to merge, in bytes): it has increased from 100 GiB to 150 GiB. This might result in large merges running after the server upgrade, which could cause an increased load on the disk subsystem. If the free space available on the server is less than twice the total amount of the merges that are running, this will cause all other merges to stop running, including merges of small data parts. As a result, INSERT queries will fail with the message "Merges are processing significantly slower than inserts." Use the SELECT * FROM system.merges query to monitor the situation. You can also check the DiskSpaceReservedForMerge metric in the system.metrics table, or in Graphite. You don't need to do anything to fix this, since the issue will resolve itself once the large merges finish. If you find this unacceptable, you can restore the previous value for the max_bytes_to_merge_at_max_space_in_pool setting: in the <merge_tree> section in config.xml, set <max_bytes_to_merge_at_max_space_in_pool>107374182400</max_bytes_to_merge_at_max_space_in_pool> and restart the server.
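The monitoring steps above can be sketched as follows, using the table and metric names given in the note:

```sql
-- Watch merges that are currently running.
SELECT * FROM system.merges;

-- Check how much disk space is reserved for running merges.
SELECT value FROM system.metrics WHERE metric = 'DiskSpaceReservedForMerge';
```

If you decide to restore the previous limit, the config.xml fragment would look like this (100 GiB = 107374182400 bytes):

```xml
<merge_tree>
    <max_bytes_to_merge_at_max_space_in_pool>107374182400</max_bytes_to_merge_at_max_space_in_pool>
</merge_tree>
```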