
Demo configuration script requires admin password #3329

Merged
merged 112 commits into opensearch-project:main from adminConfigFile on Sep 27, 2023

Conversation

stephen-crawford
Contributor

@stephen-crawford stephen-crawford commented Sep 7, 2023

Description

This change introduces an alternative to the default admin:admin credentials that the security plugin currently uses.
In this implementation, a new setting, plugins.security.authcz.admin.password, is added to the opensearch.yml file. It is parsed on plugin startup and then propagated into the internal user store of each node when the node launches. During setup of the security demo config, the script asks the user to provide an initial admin password. The user can provide it either by defining an environment variable 'initialAdminPassword' containing the password string, or by creating a file 'initialAdminPassword.txt' with a single line containing the password string and placing it under the config folder. If no initial admin password is supplied, the demo config script fails.

The setting is passed to a method in the UserService that updates the internal user store entry for the admin account, replacing the account's hash field with the hash of the password provided in the yml file.
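
As a rough illustration of the idea only (not the actual implementation in this PR), the setting could be read and hashed on startup along these lines; the helper name is hypothetical, and BouncyCastle's OpenBSDBCrypt stands in for the plugin's hasher:

    import java.security.SecureRandom;

    import org.bouncycastle.crypto.generators.OpenBSDBCrypt;
    import org.opensearch.common.settings.Settings;

    // Hypothetical helper: read the proposed setting and produce a bcrypt hash that
    // could replace the "hash" field of the admin entry in the internal user store.
    static String hashAdminPasswordFromSettings(final Settings settings) {
        final String plaintext = settings.get("plugins.security.authcz.admin.password");
        if (plaintext == null || plaintext.isEmpty()) {
            throw new IllegalStateException("no initial admin password configured");
        }
        final byte[] salt = new byte[16];          // bcrypt uses a 16-byte salt
        new SecureRandom().nextBytes(salt);
        return OpenBSDBCrypt.generate(plaintext.toCharArray(), salt, 12);  // cost factor 12, matching the demo hashes
    }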

Issues Resolved

Check List

  • New functionality includes testing
  • New functionality has been documented
  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Member

@peternied peternied left a comment


I don't see how the password from the config is used. How were you planning on resolving that?

I feel like changes around default credentials or the default setup should be focused on the configuration repository's startup workflow, so that the default admin user/password is only added when the security index is new. Within this thread:

config/internal_users.yml (outdated review thread)
@stephen-crawford
Contributor Author

Hi @peternied, thanks for taking a minute to look things over.

I am still working on how to use it. It is not used right now because I have not found where I can populate it and guarantee the configurations are live before it tries to access them.

It is in draft because I am not quite ready for review--not that I don't appreciate you looking--I am still working on it and wanted an easier way to review my own changes than what IntelliJ offers for git diffs.

@stephen-crawford
Contributor Author

I tried modifying the configuration repository to include a method that creates an admin user and updates the internal users configuration on bootstrap. Unfortunately, this led to several issues.

This is what I implemented:


    // Build an admin entry from the configured password and add it to the internal users config.
    private void createAdminUser() throws IOException {
        String plaintextPassword = this.settings.get(ConfigConstants.SECURITY_AUTHCZ_ADMIN_DEFAULT_PASSWORD);
        String hashedPassword = hash(plaintextPassword.toCharArray());
        String userJsonAsString = "{ \"hash\" : \"" + hashedPassword + "\", \"opendistro_security_roles\": [\"admin\"], "
                + "\"attributes\": { \"service\": \"false\", "
                + "\"enabled\": \"false\"}"
                + " }\n";

        JsonNode accountInfo = DefaultObjectMapper.readTree(userJsonAsString);
        ObjectNode account = (ObjectNode) accountInfo;
        SecurityDynamicConfiguration<?> sdc = getConfiguration(CType.INTERNALUSERS);
        if (!sdc.exists("admin")) {
            System.out.println("Admin user not present so setting to new account: " + account.toPrettyString());
            sdc.putCObject("admin", DefaultObjectMapper.readTree(account, sdc.getImplementingClass()));
        }
        saveAndUpdateConfigs(CType.INTERNALUSERS.toString(), client, CType.INTERNALUSERS, sdc);
        System.out.println("Finished creating admin user");
    }

    // Index the updated configuration back into the security index.
    public static void saveAndUpdateConfigs(
            final String indexName,
            final Client client,
            final CType cType,
            final SecurityDynamicConfiguration<?> configuration
    ) {
        final IndexRequest ir = new IndexRequest(indexName);
        final String id = cType.toLCString();

        configuration.removeStatic();

        try {
            client.index(
                    ir.id(id)
                            .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
                            .setIfSeqNo(configuration.getSeqNo())
                            .setIfPrimaryTerm(configuration.getPrimaryTerm())
                            .source(id, XContentHelper.toXContent(configuration, XContentType.JSON, false))
            );
        } catch (IOException e) {
            throw ExceptionsHelper.convertToOpenSearchException(e);
        }
    }

But the screenshot below shows one issue: all the nodes run the same configuration repository startup process, and without a hardcoded hash each node generates its own salt for the password and therefore produces a different hash.

This would probably still work, since when a request hits a node it can verify the password against its own salt, but I think we should avoid the hashes differing between nodes. I am not sure exactly which issues could arise, but I suspect allowing the saved hashes to vary is a bad idea.

Screenshot 2023-09-11 at 11 51 27 AM
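
To make the salt problem concrete, here is a small self-contained sketch. It assumes BouncyCastle's OpenBSDBCrypt as the bcrypt implementation, which is an assumption about the underlying hasher rather than the plugin's actual code path. Hashing the same password twice produces two different stored strings because each call draws its own random salt, yet each string still verifies against the password:

    import java.security.SecureRandom;

    import org.bouncycastle.crypto.generators.OpenBSDBCrypt;

    public class SaltDemo {
        public static void main(String[] args) {
            char[] password = "myInitialAdminPassword".toCharArray();
            SecureRandom rng = new SecureRandom();

            // Each node generates its own random salt, so the stored hashes differ.
            byte[] saltNode1 = new byte[16];
            byte[] saltNode2 = new byte[16];
            rng.nextBytes(saltNode1);
            rng.nextBytes(saltNode2);

            String hashNode1 = OpenBSDBCrypt.generate(password, saltNode1, 12);
            String hashNode2 = OpenBSDBCrypt.generate(password, saltNode2, 12);

            System.out.println(hashNode1.equals(hashNode2));                       // false: different salts
            System.out.println(OpenBSDBCrypt.checkPassword(hashNode1, password));  // true: each node can still verify
            System.out.println(OpenBSDBCrypt.checkPassword(hashNode2, password));  // true
        }
    }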

I am going to have to find another way to access the settings and make the configuration change as part of the registration process.

@stephen-crawford
Contributor Author

Seems like something is wrong with how the update to the configuration is happening:

Checking if default password is empty
Default password is not empty
[2023-09-11T12:23:35,150][WARN ][org.opensearch.security.configuration.Salt] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2023-09-11T12:23:35,151][ERROR][org.opensearch.security.auditlog.sink.SinkProvider] Default endpoint could not be created, auditlog will not work properly.
[2023-09-11T12:23:35,151][WARN ][org.opensearch.security.auditlog.routing.AuditMessageRouter] No default storage available, audit log may not work properly. Please check configuration.
[2023-09-11T12:23:35,158][WARN ][org.opensearch.gateway.DanglingIndicesState] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
Admin user not present so setting to new account: {
  "hash" : "$2y$12$OG/c/fbEGN/zd6huu7rXV.qXIar4im8cdgfAqPTbOAxzr8VL/.0Ym",
  "opendistro_security_roles" : [ "admin" ],
  "attributes" : {
    "service" : "false",
    "enabled" : "false"
  }
}
Admin user not present so setting to new account: {
  "hash" : "$2y$12$KGgkb7/Ig7ZlTQgGAnyLjeXPaDVAsxPR5Egw9Jkahl9nGjaUG/FKa",
  "opendistro_security_roles" : [ "admin" ],
  "attributes" : {
    "service" : "false",
    "enabled" : "false"
  }
}
Updating index: INTERNALUSERS with CTYPE: INTERNALUSERS and configuration: {logstash=InternalUserV7 [hash=$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[logstash], attributes={}, description=Demo logstash user, using external role mapping], snapshotrestore=InternalUserV7 [hash=$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[snapshotrestore], attributes={}, description=Demo snapshotrestore user, using external role mapping], admin=InternalUserV7 [hash=$2y$12$OG/c/fbEGN/zd6huu7rXV.qXIar4im8cdgfAqPTbOAxzr8VL/.0Ym, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={service=false, enabled=false}, description=null], kibanaserver=InternalUserV7 [hash=$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H., enabled=false, service=false, reserved=true, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo OpenSearch Dashboards user], kibanaro=InternalUserV7 [hash=$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[kibanauser, readall], attributes={attribute1=value1, attribute2=value2, attribute3=value3}, description=Demo OpenSearch Dashboards read only user, using external role mapping], readall=InternalUserV7 [hash=$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[readall], attributes={}, description=Demo readall user, using external role mapping], anomalyadmin=InternalUserV7 [hash=$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3., enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo anomaly admin user, using internal role]}
Updating index: INTERNALUSERS with CTYPE: INTERNALUSERS and configuration: {logstash=InternalUserV7 [hash=$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[logstash], attributes={}, description=Demo logstash user, using external role mapping], snapshotrestore=InternalUserV7 [hash=$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[snapshotrestore], attributes={}, description=Demo snapshotrestore user, using external role mapping], admin=InternalUserV7 [hash=$2y$12$KGgkb7/Ig7ZlTQgGAnyLjeXPaDVAsxPR5Egw9Jkahl9nGjaUG/FKa, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={service=false, enabled=false}, description=null], kibanaserver=InternalUserV7 [hash=$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H., enabled=false, service=false, reserved=true, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo OpenSearch Dashboards user], kibanaro=InternalUserV7 [hash=$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[kibanauser, readall], attributes={attribute1=value1, attribute2=value2, attribute3=value3}, description=Demo OpenSearch Dashboards read only user, using external role mapping], readall=InternalUserV7 [hash=$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[readall], attributes={}, description=Demo readall user, using external role mapping], anomalyadmin=InternalUserV7 [hash=$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3., enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo anomaly admin user, using internal role]}
Result of update: index {[INTERNALUSERS][internalusers], source[n/a, actual length: [2.6kb], max length: 2kb]}  ir pipeline resolved? true
Result of update: index {[INTERNALUSERS][internalusers], source[n/a, actual length: [2.6kb], max length: 2kb]}  ir pipeline resolved? true
Finished creating admin user
Finished creating admin user
Internal users are: {logstash=InternalUserV7 [hash=$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[logstash], attributes={}, description=Demo logstash user, using external role mapping], snapshotrestore=InternalUserV7 [hash=$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[snapshotrestore], attributes={}, description=Demo snapshotrestore user, using external role mapping], kibanaserver=InternalUserV7 [hash=$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H., enabled=false, service=false, reserved=true, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo OpenSearch Dashboards user], kibanaro=InternalUserV7 [hash=$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[kibanauser, readall], attributes={attribute1=value1, attribute2=value2, attribute3=value3}, description=Demo OpenSearch Dashboards read only user, using external role mapping], readall=InternalUserV7 [hash=$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[readall], attributes={}, description=Demo readall user, using external role mapping], anomalyadmin=InternalUserV7 [hash=$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3., enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo anomaly admin user, using internal role]} admin user is: null
Internal users are: {logstash=InternalUserV7 [hash=$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[logstash], attributes={}, description=Demo logstash user, using external role mapping], snapshotrestore=InternalUserV7 [hash=$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[snapshotrestore], attributes={}, description=Demo snapshotrestore user, using external role mapping], kibanaserver=InternalUserV7 [hash=$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H., enabled=false, service=false, reserved=true, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo OpenSearch Dashboards user], kibanaro=InternalUserV7 [hash=$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[kibanauser, readall], attributes={attribute1=value1, attribute2=value2, attribute3=value3}, description=Demo OpenSearch Dashboards read only user, using external role mapping], readall=InternalUserV7 [hash=$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[readall], attributes={}, description=Demo readall user, using external role mapping], anomalyadmin=InternalUserV7 [hash=$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3., enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo anomaly admin user, using internal role]} admin user is: null
Admin user not present so setting to new account: {
  "hash" : "$2y$12$2Lm/4/73wd1cApSg8p0pCefkRDo1wvCtfB6QlUilL7u0mU9RNvGnu",
  "opendistro_security_roles" : [ "admin" ],
  "attributes" : {
    "service" : "false",
    "enabled" : "false"
  }
}
Updating index: INTERNALUSERS with CTYPE: INTERNALUSERS and configuration: {logstash=InternalUserV7 [hash=$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[logstash], attributes={}, description=Demo logstash user, using external role mapping], snapshotrestore=InternalUserV7 [hash=$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[snapshotrestore], attributes={}, description=Demo snapshotrestore user, using external role mapping], admin=InternalUserV7 [hash=$2y$12$2Lm/4/73wd1cApSg8p0pCefkRDo1wvCtfB6QlUilL7u0mU9RNvGnu, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={service=false, enabled=false}, description=null], kibanaserver=InternalUserV7 [hash=$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H., enabled=false, service=false, reserved=true, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo OpenSearch Dashboards user], kibanaro=InternalUserV7 [hash=$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[kibanauser, readall], attributes={attribute1=value1, attribute2=value2, attribute3=value3}, description=Demo OpenSearch Dashboards read only user, using external role mapping], readall=InternalUserV7 [hash=$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[readall], attributes={}, description=Demo readall user, using external role mapping], anomalyadmin=InternalUserV7 [hash=$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3., enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo anomaly admin user, using internal role]}
Result of update: index {[INTERNALUSERS][internalusers], source[n/a, actual length: [2.6kb], max length: 2kb]}  ir pipeline resolved? true
Finished creating admin user
Internal users are: {logstash=InternalUserV7 [hash=$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[logstash], attributes={}, description=Demo logstash user, using external role mapping], snapshotrestore=InternalUserV7 [hash=$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[snapshotrestore], attributes={}, description=Demo snapshotrestore user, using external role mapping], kibanaserver=InternalUserV7 [hash=$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H., enabled=false, service=false, reserved=true, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo OpenSearch Dashboards user], kibanaro=InternalUserV7 [hash=$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[kibanauser, readall], attributes={attribute1=value1, attribute2=value2, attribute3=value3}, description=Demo OpenSearch Dashboards read only user, using external role mapping], readall=InternalUserV7 [hash=$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2, enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[readall], attributes={}, description=Demo readall user, using external role mapping], anomalyadmin=InternalUserV7 [hash=$2y$12$TRwAAJgnNo67w3rVUz4FIeLx9Dy/llB79zf9I15CKJ9vkM4ZzAd3., enabled=false, service=false, reserved=false, hidden=false, _static=false, backend_roles=[], attributes={}, description=Demo anomaly admin user, using internal role]} admin user is: null
[2023-09-11T12:23:44,441][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num2][127.0.0.1:6833] Node not connected retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2023-09-11T12:23:44,441][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num2][127.0.0.1:6833] Node not connected retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2023-09-11T12:23:44,442][ERROR][

I would expect the admin user to be present in the check after the index update, but it is not. So for some reason, the configuration does not get updated.

I also notice that the configuration loader complains that the nodes are not connected:

[2023-09-11T12:23:44,442][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num2][127.0.0.1:6833] Node not connected retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2023-09-11T12:23:44,442][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num2][127.0.0.1:6833] Node not connected retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2023-09-11T12:23:45,170][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num1][127.0.0.1:7349] Node not connected retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2023-09-11T12:23:45,171][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num1][127.0.0.1:7349] Node not connected retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, NODESDN, WHITELIST, ALLOWLIST, AUDIT] (index=.opendistro_security)
[2023-09-11T12:23:45,171][ERROR][org.opensearch.security.configuration.ConfigurationLoaderSecurity7] Failure [node_utest_n5_fnull_t11202367276541_num1][127.0.0.1:73

@stephen-crawford
Contributor Author

Shard indexing pressure may be killing the node during the write?

org.opensearch.transport.SendRequestTransportException: [node_utest_n2_fnull_t15720773812791_num1][127.0.0.1:7862][internal:index/seq_no/resync[p]]
	at org.opensearch.transport.TransportService$4.doRun(TransportService.java:356) [opensearch-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:908) [opensearch-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: org.opensearch.node.NodeClosedException: node closed {node_utest_n2_fnull_t15720773812791_num1}{3xwCawAAQACLJeBYAAAAAA}{rTO27LqDQquW2-BEvhhJXQ}{127.0.0.1}{127.0.0.1:7862}{d}{shard_indexing_pressure_enabled=true}
	... 6 more
[2023-09-11T13:46:57,849][WARN ][org.opensearch.indices.cluster.IndicesClusterStateService] [.opendistro_security][0] marking and sending shard failed due to [shard failure, reason [exception during primary-replica resync]]
org.opensearch.transport.SendRequestTransportException: [node_utest_n2_fnull_t15720773812791_num1][127.0.0.1:7862][internal:index/seq_no/resync[p]]
	at org.opensearch.transport.TransportService$4.doRun(TransportService.java:356) ~[opensearch-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:908) ~[opensearch-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) ~[opensearch-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: org.opensearch.node.NodeClosedException: node closed {node_utest_n2_fnull_t15720773812791_num1}{3xwCawAAQACLJeBYAAAAAA}{rTO27LqDQquW2-BEvhhJXQ}{127.0.0.1}{127.0.0.1:7862}{d}{shard_indexing_pressure_enabled=true}
	... 6 more
[2023-09-11T13:46:57,850][WARN ][org.opensearch.cluster.action.shard.ShardStateAction] node closed while execution action [internal:cluster/shard/failure] for shard entry [shard id [[.opendistro_security][0]], allocation id [3iCSVeeqRMmJyBuLMdpo1A], primary term [0], message [shard failure, reason [exception during primary-replica resync]], failure [SendRequestTransportException[[node_utest_n2_fnull_t15720773812791_num1][127.0.0.1:7862][internal:index/seq_no/resync[p]]]; nested: NodeClosedException[node closed {node_utest_n2_fnull_t15720773812791_num1}{3xwCawAAQACLJeBYAAAAAA}{rTO27LqDQquW2-BEvhhJXQ}{127.0.0.1}{127.0.0.1:7862}{d}{shard_indexing_pressure_enabled=true}]; ], markAsStale [true]]

expected:<200> but was:<401>

@stephen-crawford
Contributor Author

Screenshot 2023-09-12 at 10 44 09 AM

The current implementation is an imperfect solution that sometimes fails, and I am not sure why.

@stephen-crawford
Contributor Author

1e587a1 dynamically inserts an admin user into the internal user configuration during node startup. Unfortunately, it does not work because we have no guarantees about the timing or order of the processes involved.

Signed-off-by: Stephen Crawford <[email protected]>
Member

@peternied peternied left a comment


What do you think about only supporting these 3 scenarios - are there others that I am missing?

No password found:

> ./install_demo_configuration.sh
>    Unable to find admin password for cluster, please run `export CLUSTER_ADMIN_PASSWORD=$(openssl rand -base64 48 | cut -c1-16)` or create a file {OPENSEARCH_ROOT}/CLUSTER_ADMIN_PASSWORD with a single line that contains the password followed by a newline

Password via environment variable

> export CLUSTER_ADMIN_PASSWORD=thisIsMyOwnPassword
> ./install_demo_configuration.sh
...
>    Found admin password in environment variable, please run `echo $CLUSTER_ADMIN_PASSWORD` to see the password
...
# SUCCESS can start ./bin/opensearch and use that password

Password via file

> openssl rand -base64 48 | cut -c1-16 > {OPENSEARCH_ROOT}/CLUSTER_ADMIN_PASSWORD
> ./install_demo_configuration.sh
...
>    Found admin password in password file, please run `cat {OPENSEARCH_ROOT}/CLUSTER_ADMIN_PASSWORD` to see the password
...
# SUCCESS can start ./bin/opensearch and use that password


@stephen-crawford
Contributor Author

Hi @peternied, those are basically the scenarios I am trying to cover now. The issue is that if we want to remove the default admin password from the internal configuration, we need to be able to provide one for the tests to use.

We could swap the tests to use an account other than admin that still has the all-access permissions, but I am not sure whether that would be considered just as bad.

If you look at the changes I have made thus far, you can see the trouble I am running into.

I have successfully added logic for a configuration setting to be read from opensearch.yml and used during the install script. I can similarly look for an environment variable--I have not set a name for it yet--should the config value not be present.

The problem is that this logic only executes with the installation script and the tests do not run the installation script.

We need something that will always add the admin user to the internal configuration whether it is a test or not. Otherwise, every test relying on SingleClusterTest and executing a request with admin:admin or admin:newPassword will fail.

As it stands, there is no way to run the tests without a default password, and no step of test execution creates the new admin user they require.

Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
@peternied
Member

I think you can support those 3 scenarios without any Java code changes. Try using exactly the input in those examples to get those results.

Member

@peternied peternied left a comment


Darn, we might need to figure out a way to generate the password, maybe by reusing the hasher? This raises a question I'll look into: why are we using an algorithm that isn't a standard part of openssl?
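
For example, a throwaway generator along these lines could reuse the same bcrypt implementation to turn the script-provided password into a single hash string that every node stores identically (the environment variable name comes from the PR description above; the class name and overall flow are assumptions for illustration, since bcrypt is not exposed by the openssl CLI):

    import java.security.SecureRandom;

    import org.bouncycastle.crypto.generators.OpenBSDBCrypt;

    // Illustrative stand-in for "reuse the hasher": hash a password supplied by the demo script once,
    // so the same hash string can be written into internal_users.yml on every node.
    public class GenerateAdminHash {
        public static void main(String[] args) {
            String password = System.getenv("initialAdminPassword");  // variable name taken from the PR description
            if (password == null || password.isEmpty()) {
                System.err.println("initialAdminPassword is not set");
                System.exit(1);
            }
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            System.out.println(OpenBSDBCrypt.generate(password.toCharArray(), salt, 12));
        }
    }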

config/internal_users.yml (outdated review thread)
config/admin_password.txt (outdated review thread)
tools/install_demo_configuration.sh (outdated review thread)
tools/install_demo_configuration.bat (outdated review thread)
tools/install_demo_configuration.sh (outdated review thread)
tools/install_demo_configuration.bat (outdated review thread)
stephen-crawford and others added 7 commits September 20, 2023 12:01
Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Stephen Crawford <[email protected]>
peternied
peternied previously approved these changes Sep 27, 2023
@peternied
Member

@DarshitChanpura @willyborankin @reta Could I get another look at this change?

@peternied peternied added the backport 2.x backport to 2.x branch label Sep 27, 2023
@peternied peternied changed the title Replace hardcoded admin:admin default credentials with operator specified password Demo configuration requires admin password to be provided Sep 27, 2023
@peternied peternied changed the title Demo configuration requires admin password to be provided Demo configuration script requires admin password Sep 27, 2023
@peternied peternied added the v2.11.0 Issues targeting the 2.11 release label Sep 27, 2023
@peternied peternied merged commit 8628a89 into opensearch-project:main Sep 27, 2023
59 checks passed
opensearch-trigger-bot bot pushed a commit that referenced this pull request Sep 27, 2023
This change requires an alternative to the default credentials
for the admin user.

The credentials can be provided to the script via:
- `initialAdminPassword` environment variable
- a file with a single line that contains the password.

The admin password for the cluster will be printed to the console output of the `tools/install_demo_configuration.(bat|sh)`

Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Peter Nied <[email protected]>
Co-authored-by: Peter Nied <[email protected]>
(cherry picked from commit 8628a89)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
peternied added a commit that referenced this pull request Oct 3, 2023
Backport 8628a89 from #3329.

### Related Issues
-  Related #3285

Signed-off-by: Stephen Crawford <[email protected]>
Signed-off-by: Peter Nied <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Peter Nied <[email protected]>
@stephen-crawford stephen-crawford deleted the adminConfigFile branch December 11, 2023 19:26
Labels
backport 2.x backport to 2.x branch v2.11.0 Issues targeting the 2.11 release
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Replace admin:admin default credentials with configuration file password
3 participants