+++ /dev/null
-= vendor/github.com/karrick/godirwalk licensed under: =
-
-BSD 2-Clause License
-
-Copyright (c) 2017, Karrick McDermott
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this
- list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
- and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-= vendor/github.com/karrick/godirwalk/LICENSE 7bea66fc0a31c6329f9392034bee75d2
+++ /dev/null
-= vendor/github.com/mistifyio/go-zfs licensed under: =
-
-Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "{}"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright (c) 2014, OmniTI Computer Consulting, Inc.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-= vendor/github.com/mistifyio/go-zfs/LICENSE cce9462224bfb44c1866ef7bd5eddf54
github.com/go-logr/logr v1.4.3
github.com/go-openapi/jsonreference v0.20.2
github.com/godbus/dbus/v5 v5.2.0
- github.com/google/cadvisor v0.53.0
+ github.com/google/cadvisor v0.55.1
github.com/google/cel-go v0.26.0
github.com/google/gnostic-models v0.7.0
github.com/google/go-cmp v0.7.0
github.com/jonboulle/clockwork v0.5.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
- github.com/karrick/godirwalk v1.17.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
-github.com/docker/docker v28.2.2+incompatible h1:CjwRSksz8Yo4+RmQ339Dp/D2tGO5JxwYeqtMOEe0LDw=
-github.com/docker/docker v28.2.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
-github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
-github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
+github.com/docker/docker v28.3.3+incompatible h1:Dypm25kh4rmk49v1eiVbsAtpAsYURjYkaKubwuBdxEI=
+github.com/docker/docker v28.3.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
+github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
-github.com/google/cadvisor v0.53.0 h1:pmveUw2VBlr/T2SBE9Fsp8gdLhKWyOBkECGbaas9mcI=
-github.com/google/cadvisor v0.53.0/go.mod h1:Tz3zf/exzFfdWd1T/U/9eNst0ZR2C6CIV62LJATj5tg=
+github.com/google/cadvisor v0.55.1 h1:81OXN/Hr9RVME6NJw2DI7YKoyR4MkGESrgjKlCAiHBk=
+github.com/google/cadvisor v0.55.1/go.mod h1:Zbo4qO/Nyvsy7PfNAcBkXJz2G/VzYytUUc+iNqX8px0=
github.com/google/cel-go v0.26.0 h1:DPGjXackMpJWH680oGY4lZhYjIameYmR+/6RBdDGmaI=
github.com/google/cel-go v0.26.0/go.mod h1:A9O8OU9rdvrK5MQyrqfIxo1a0u4g3sF8KB6PUIaryMM=
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
-github.com/karrick/godirwalk v1.17.0 h1:b4kY7nqDdioR/6qnbHQyDvmA17u5G1cZ6J+CZXwSWoI=
-github.com/karrick/godirwalk v1.17.0/go.mod h1:j4mkqPuvaLI8mp1DroR3P6ad7cyYd4c1qeJ3RV7ULlk=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
-github.com/opencontainers/runc v1.3.0/go.mod h1:9wbWt42gV+KRxKRVVugNP6D5+PQciRbenB4fLVsqGPs=
+github.com/opencontainers/runc v1.3.3/go.mod h1:D7rL72gfWxVs9cJ2/AayxB0Hlvn9g0gaF1R7uunumSI=
github.com/opencontainers/runtime-spec v1.2.1 h1:S4k4ryNgEpxW1dzyqffOmhI1BHYcjzU8lpJfSlR0xww=
github.com/opencontainers/runtime-spec v1.2.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.13.0 h1:Zza88GWezyT7RLql12URvoxsbLfjFx988+LGaWfbL84=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
-github.com/opencontainers/runc v1.3.0 h1:cvP7xbEvD0QQAs0nZKLzkVog2OPZhI/V2w3WmTmUSXI=
+github.com/opencontainers/runc v1.3.3 h1:qlmBbbhu+yY0QM7jqfuat7M1H3/iXjju3VkP9lkFQr4=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e h1:aoZm08cpOy4WuID//EZDgcC4zIxODThtZNPirFr42+A=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
github.com/rogpeppe/fastuuid v1.2.0 h1:Ppwyp6VYCF1nvBTXL3trRso7mXMlRrw9ooo375wvi2s=
_ "github.com/google/cadvisor/container/crio/install"
_ "github.com/google/cadvisor/container/systemd/install"
+ // Register filesystem plugins needed for container stats.
+ _ "github.com/google/cadvisor/fs/btrfs/install"
+ _ "github.com/google/cadvisor/fs/devicemapper/install"
+ _ "github.com/google/cadvisor/fs/nfs/install"
+ _ "github.com/google/cadvisor/fs/overlay/install"
+ _ "github.com/google/cadvisor/fs/tmpfs/install"
+ _ "github.com/google/cadvisor/fs/vfs/install"
+
"github.com/google/cadvisor/cache/memory"
cadvisormetrics "github.com/google/cadvisor/container"
cadvisorapi "github.com/google/cadvisor/info/v1"
// ErrDataNotFound is the error resulting if failed to find a container in memory cache.
var ErrDataNotFound = errors.New("unable to find data in memory cache")
+// containerCacheMap is a typed wrapper around sync.Map that eliminates the need
+// for type assertions at every call site. It stores container name strings
+// mapped to *containerCache values.
+type containerCacheMap struct {
+ m sync.Map
+}
+
+// Load retrieves a container cache by name. Returns nil, false if not found.
+func (c *containerCacheMap) Load(name string) (*containerCache, bool) {
+ v, ok := c.m.Load(name)
+ if !ok {
+ return nil, false
+ }
+ return v.(*containerCache), true
+}
+
+// Store saves a container cache with the given name.
+func (c *containerCacheMap) Store(name string, cache *containerCache) {
+ c.m.Store(name, cache)
+}
+
+// LoadOrStore returns the existing cache if present, otherwise stores and returns the given one.
+func (c *containerCacheMap) LoadOrStore(name string, cache *containerCache) (*containerCache, bool) {
+ v, loaded := c.m.LoadOrStore(name, cache)
+ return v.(*containerCache), loaded
+}
+
+// Delete removes a container cache by name.
+func (c *containerCacheMap) Delete(name string) {
+ c.m.Delete(name)
+}
+
// TODO(vmarmol): See about refactoring this class, we have an unnecessary redirection of containerCache and InMemoryCache.
// containerCache is used to store per-container information
type containerCache struct {
}
type InMemoryCache struct {
- lock sync.RWMutex
- containerCacheMap map[string]*containerCache
+ containerCacheMap containerCacheMap
maxAge time.Duration
backend []storage.StorageDriver
}
func (c *InMemoryCache) AddStats(cInfo *info.ContainerInfo, stats *info.ContainerStats) error {
- var cstore *containerCache
- var ok bool
-
- func() {
- c.lock.Lock()
- defer c.lock.Unlock()
- if cstore, ok = c.containerCacheMap[cInfo.ContainerReference.Name]; !ok {
- cstore = newContainerStore(cInfo.ContainerReference, c.maxAge)
- c.containerCacheMap[cInfo.ContainerReference.Name] = cstore
- }
- }()
+ name := cInfo.ContainerReference.Name
+ cstore, ok := c.containerCacheMap.Load(name)
+ if !ok {
+ newStore := newContainerStore(cInfo.ContainerReference, c.maxAge)
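+ // LoadOrStore returns the store another goroutine may have created concurrently, discarding newStore in that case.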
+ cstore, _ = c.containerCacheMap.LoadOrStore(name, newStore)
+ }
for _, backend := range c.backend {
// TODO(monnand): To deal with long delay write operations, we
}
func (c *InMemoryCache) RecentStats(name string, start, end time.Time, maxStats int) ([]*info.ContainerStats, error) {
- var cstore *containerCache
- var ok bool
- err := func() error {
- c.lock.RLock()
- defer c.lock.RUnlock()
- if cstore, ok = c.containerCacheMap[name]; !ok {
- return ErrDataNotFound
- }
- return nil
- }()
- if err != nil {
- return nil, err
+ cstore, ok := c.containerCacheMap.Load(name)
+ if !ok {
+ return nil, ErrDataNotFound
}
-
return cstore.RecentStats(start, end, maxStats)
}
func (c *InMemoryCache) Close() error {
- c.lock.Lock()
- c.containerCacheMap = make(map[string]*containerCache, 32)
- c.lock.Unlock()
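+ // Assigning a fresh zero value replaces the underlying sync.Map, discarding all cached entries.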
+ c.containerCacheMap = containerCacheMap{}
return nil
}
func (c *InMemoryCache) RemoveContainer(containerName string) error {
- c.lock.Lock()
- delete(c.containerCacheMap, containerName)
- c.lock.Unlock()
+ c.containerCacheMap.Delete(containerName)
return nil
}
maxAge time.Duration,
backend []storage.StorageDriver,
) *InMemoryCache {
- ret := &InMemoryCache{
- containerCacheMap: make(map[string]*containerCache, 32),
- maxAge: maxAge,
- backend: backend,
+ return &InMemoryCache{
+ maxAge: maxAge,
+ backend: backend,
}
- return ret
}
// TODO : Add checks for validity of config file (eg : Accurate JSON fields)
if len(configInJSON.MetricsConfig) == 0 {
- return nil, fmt.Errorf("No metrics provided in config")
+ return nil, fmt.Errorf("no metrics provided in config")
}
minPollFrequency := time.Duration(0)
regexprs[ind], err = regexp.Compile(metricConfig.Regex)
if err != nil {
- return nil, fmt.Errorf("Invalid regexp %v for metric %v", metricConfig.Regex, metricConfig.Name)
+ return nil, fmt.Errorf("invalid regexp %v for metric %v", metricConfig.Regex, metricConfig.Name)
}
}
}
if len(configInJSON.MetricsConfig) > metricCountLimit {
- return nil, fmt.Errorf("Too many metrics defined: %d limit: %d", len(configInJSON.MetricsConfig), metricCountLimit)
+ return nil, fmt.Errorf("too many metrics defined: %d limit: %d", len(configInJSON.MetricsConfig), metricCountLimit)
}
return &GenericCollector{
}
} else {
- errorSlice = append(errorSlice, fmt.Errorf("Unexpected value of 'data_type' for metric '%v' in config ", metricConfig.Name))
+ errorSlice = append(errorSlice, fmt.Errorf("unexpected value of 'data_type' for metric '%v' in config ", metricConfig.Name))
}
} else {
- errorSlice = append(errorSlice, fmt.Errorf("No match found for regexp: %v for metric '%v' in config", metricConfig.Regex, metricConfig.Name))
+ errorSlice = append(errorSlice, fmt.Errorf("no match found for regexp: %v for metric '%v' in config", metricConfig.Regex, metricConfig.Name))
}
}
return nextCollectionTime, metrics, compileErrors(errorSlice)
}
if metricCountLimit < 0 {
- return nil, fmt.Errorf("Metric count limit must be greater than or equal to 0")
+ return nil, fmt.Errorf("metric count limit must be greater than or equal to 0")
}
var metricsSet map[string]bool
}
if len(configInJSON.MetricsConfig) > metricCountLimit {
- return nil, fmt.Errorf("Too many metrics defined: %d limit %d", len(configInJSON.MetricsConfig), metricCountLimit)
+ return nil, fmt.Errorf("too many metrics defined: %d limit %d", len(configInJSON.MetricsConfig), metricCountLimit)
}
// TODO : Add checks for validity of config file (eg : Accurate JSON fields)
import "github.com/google/cadvisor/container"
-func (endpointConfig *EndpointConfig) configure(containerHandler container.ContainerHandler) {
- //If the exact URL was not specified, generate it based on the ip address of the container.
- endpoint := endpointConfig
- if endpoint.URL == "" {
+func (ec *EndpointConfig) configure(containerHandler container.ContainerHandler) {
+ // If the exact URL was not specified, generate it based on the ip address of the container.
+ if ec.URL == "" {
ipAddress := containerHandler.GetContainerIPAddress()
- endpointConfig.URL = endpoint.URLConfig.Protocol + "://" + ipAddress + ":" + endpoint.URLConfig.Port.String() + endpoint.URLConfig.Path
+ ec.URL = ec.URLConfig.Protocol + "://" + ipAddress + ":" + ec.URLConfig.Port.String() + ec.URLConfig.Path
}
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Unmarshals a containers description JSON file. The JSON file contains
// an array of ContainerHint structs, each with a container's id and networkInterface.
// This allows collecting stats about network interfaces configured outside docker.
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Handler for Docker containers.
package common
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package common
import (
"strings"
"time"
- "github.com/karrick/godirwalk"
"github.com/opencontainers/cgroups"
"golang.org/x/sys/unix"
// Lists all directories under "path" and outputs the results as children of "parent".
func ListDirectories(dirpath string, parent string, recursive bool, output map[string]struct{}) error {
- buf := make([]byte, godirwalk.MinimumScratchBufferSize)
- return listDirectories(dirpath, parent, recursive, output, buf)
-}
-
-func listDirectories(dirpath string, parent string, recursive bool, output map[string]struct{}, buf []byte) error {
- dirents, err := godirwalk.ReadDirents(dirpath, buf)
+ dirents, err := os.ReadDir(dirpath)
if err != nil {
// Ignore if this hierarchy does not exist.
if errors.Is(err, fs.ErrNotExist) {
- err = nil
+ return nil
}
return err
}
// List subcontainers if asked to.
if recursive {
- err := listDirectories(path.Join(dirpath, dirname), name, true, output, buf)
- if err != nil {
+ if err := ListDirectories(path.Join(dirpath, dirname), name, true, output); err != nil {
return err
}
}
stats.IoTime,
stats.IoWaitTime,
stats.Sectors,
+ stats.IoCostUsage,
+ stats.IoCostWait,
+ stats.IoCostIndebt,
+ stats.IoCostIndelay,
)
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package common
import (
ContainerTypeDocker
ContainerTypeCrio
ContainerTypeContainerd
- ContainerTypeMesos
ContainerTypePodman
)
// Returns the container's ip address, if available
GetContainerIPAddress() string
+ // GetExitCode returns the container's exit code if available.
+ // Returns an error if the container has not exited, exit codes are not supported
+ // for this handler type, or the container information is unavailable.
+ GetExitCode() (int, error)
+
// Returns whether the container still exists.
Exists() bool
type ContainerdClient interface {
LoadContainer(ctx context.Context, id string) (*containers.Container, error)
TaskPid(ctx context.Context, id string) (uint32, error)
+ LoadTaskProcess(ctx context.Context, id string) (*tasktypes.Process, error)
+ TaskExitStatus(ctx context.Context, id string) (uint32, error)
Version(ctx context.Context) (string, error)
}
var once sync.Once
var ctrdClient ContainerdClient = nil
+var ctrdClientErr error = nil
const (
maxBackoffDelay = 3 * time.Second
// Client creates a containerd client
func Client(address, namespace string) (ContainerdClient, error) {
- var retErr error
once.Do(func() {
tryConn, err := net.DialTimeout("unix", address, connectionTimeout)
if err != nil {
- retErr = fmt.Errorf("containerd: cannot unix dial containerd api service: %v", err)
+ ctrdClientErr = fmt.Errorf("containerd: cannot unix dial containerd api service: %v", err)
return
}
tryConn.Close()
//nolint:staticcheck // SA1019
conn, err := grpc.DialContext(ctx, dialer.DialAddress(address), gopts...)
if err != nil {
- retErr = err
+ ctrdClientErr = err
return
}
ctrdClient = &client{
versionService: versionapi.NewVersionClient(conn),
}
})
- return ctrdClient, retErr
+ return ctrdClient, ctrdClientErr
}
func (c *client) LoadContainer(ctx context.Context, id string) (*containers.Container, error) {
return response.Process.Pid, nil
}
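+
+// LoadTaskProcess fetches the task's process information from containerd's task service.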
+func (c *client) LoadTaskProcess(ctx context.Context, id string) (*tasktypes.Process, error) {
+ response, err := c.taskService.Get(ctx, &tasksapi.GetRequest{
+ ContainerID: id,
+ })
+ if err != nil {
+ return nil, errgrpc.ToNative(err)
+ }
+
+ return response.Process, nil
+}
+
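+// TaskExitStatus returns the exit status of the container's task, or an error if the task has not stopped.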
+func (c *client) TaskExitStatus(ctx context.Context, id string) (uint32, error) {
+ response, err := c.taskService.Get(ctx, &tasksapi.GetRequest{
+ ContainerID: id,
+ })
+ if err != nil {
+ return 0, errgrpc.ToNative(err)
+ }
+ if response.Process.Status != tasktypes.Status_STOPPED {
+ return 0, fmt.Errorf("container %s has not exited (status: %v)", id, response.Process.Status)
+ }
+ return response.Process.ExitStatus, nil
+}
+
func (c *client) Version(ctx context.Context) (string, error) {
response, err := c.versionService.Version(ctx, &emptypb.Empty{})
if err != nil {
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package containerd
import (
+ "context"
"flag"
"fmt"
"path"
"regexp"
"strings"
- "golang.org/x/net/context"
"k8s.io/klog/v2"
"github.com/google/cadvisor/container"
package containerd
import (
- "github.com/google/cadvisor/container/containerd/namespaces"
- "golang.org/x/net/context"
+ "context"
+
"google.golang.org/grpc"
+
+ "github.com/google/cadvisor/container/containerd/namespaces"
)
type namespaceInterceptor struct {
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Handler for containerd containers.
package containerd
import (
+ "context"
"encoding/json"
"errors"
"fmt"
"github.com/containerd/errdefs"
"github.com/opencontainers/cgroups"
specs "github.com/opencontainers/runtime-spec/specs-go"
- "golang.org/x/net/context"
"github.com/google/cadvisor/container"
"github.com/google/cadvisor/container/common"
includedMetrics container.MetricSet
libcontainerHandler *containerlibcontainer.Handler
+ client ContainerdClient
}
var _ container.ContainerHandler = &containerdContainerHandler{}
includedMetrics: metrics,
reference: containerReference,
libcontainerHandler: libcontainerHandler,
+ client: client,
}
// Add the name and bare ID as aliases of the container.
handler.image = cntr.Image
// containerd doesn't take care of networking, so it doesn't maintain networking state
return ""
}
+
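+// GetExitCode asks containerd for the stopped task's exit status and converts it to an int.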
+func (h *containerdContainerHandler) GetExitCode() (int, error) {
+ ctx := context.Background()
+ exitStatus, err := h.client.TaskExitStatus(ctx, h.reference.Id)
+ if err != nil {
+ return -1, err
+ }
+ return int(exitStatus), nil
+}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// The install package registers containerd.NewPlugin() as the "containerd" container provider when imported
package install
// See the License for the specific language governing permissions and
// limitations under the License.
//go:build !windows
-// +build !windows
/*
Copyright The containerd Authors.
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package containerd
import (
func configureUnixTransport(tr *http.Transport, proto, addr string) error {
if len(addr) > maxUnixSocketPathSize {
- return fmt.Errorf("Unix socket path %q is too long", addr)
+ return fmt.Errorf("unix socket path %q is too long", addr)
}
// No need for compression in local communications.
tr.DisableCompression = true
if resp.StatusCode != http.StatusOK {
respBody, err := io.ReadAll(resp.Body)
if err != nil {
- return nil, fmt.Errorf("Error finding container %s: Status %d", id, resp.StatusCode)
+ return nil, fmt.Errorf("error finding container %s: status %d", id, resp.StatusCode)
}
- return nil, fmt.Errorf("Error finding container %s: Status %d returned error %s", id, resp.StatusCode, string(respBody))
+ return nil, fmt.Errorf("error finding container %s: status %d returned error %s", id, resp.StatusCode, string(respBody))
}
if err := json.NewDecoder(resp.Body).Decode(&cInfo); err != nil {
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package crio
import (
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Handler for CRI-O containers.
package crio
func (h *crioContainerHandler) Type() container.ContainerType {
return container.ContainerTypeCrio
}
+
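+// GetExitCode always returns an error because exit codes are not available from the CRI-O API.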
+func (h *crioContainerHandler) GetExitCode() (int, error) {
+ return -1, fmt.Errorf("exit code not available from CRI-O API")
+}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// The install package registers crio.NewPlugin() as the "crio" container provider when imported
package install
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package crio
import (
if _, found := plugins[name]; found {
return fmt.Errorf("Plugin %q was registered twice", name)
}
- klog.V(4).Infof("Registered Plugin %q", name)
plugins[name] = plugin
return nil
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package libcontainer
import (
func (h *Handler) schedulerStatsFromProcs() (info.CpuSchedstat, error) {
pids, err := h.cgroupManager.GetAllPids()
if err != nil {
- return info.CpuSchedstat{}, fmt.Errorf("Could not get PIDs for container %d: %w", h.pid, err)
+ return info.CpuSchedstat{}, fmt.Errorf("could not get PIDs for container %d: %w", h.pid, err)
}
alivePids := make(map[int]struct{}, len(pids))
for _, pid := range pids {
ret.Cpu.CFS.Periods = s.CpuStats.ThrottlingData.Periods
ret.Cpu.CFS.ThrottledPeriods = s.CpuStats.ThrottlingData.ThrottledPeriods
ret.Cpu.CFS.ThrottledTime = s.CpuStats.ThrottlingData.ThrottledTime
+ ret.Cpu.CFS.BurstsPeriods = s.CpuStats.BurstData.BurstsPeriods
+ ret.Cpu.CFS.BurstTime = s.CpuStats.BurstData.BurstTime
setPSIStats(s.CpuStats.PSI, &ret.Cpu.PSI)
if !withPerCPU {
ret.DiskIo.IoWaitTime = diskStatsCopy(s.BlkioStats.IoWaitTimeRecursive)
ret.DiskIo.IoMerged = diskStatsCopy(s.BlkioStats.IoMergedRecursive)
ret.DiskIo.IoTime = diskStatsCopy(s.BlkioStats.IoTimeRecursive)
+ ret.DiskIo.IoCostUsage = diskStatsCopy(s.BlkioStats.IoCostUsage)
+ ret.DiskIo.IoCostWait = diskStatsCopy(s.BlkioStats.IoCostWait)
+ ret.DiskIo.IoCostIndebt = diskStatsCopy(s.BlkioStats.IoCostIndebt)
+ ret.DiskIo.IoCostIndelay = diskStatsCopy(s.BlkioStats.IoCostIndelay)
setPSIStats(s.BlkioStats.PSI, &ret.DiskIo.PSI)
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package libcontainer
import (
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package raw
import (
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Handler for "raw" containers.
package raw
return container.ContainerTypeRaw
}
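+
+// GetExitCode always returns an error because exit codes are not applicable to raw cgroup containers.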
+func (h *rawContainerHandler) GetExitCode() (int, error) {
+ return -1, fmt.Errorf("exit codes not applicable for raw cgroup containers")
+}
+
type fsNamer struct {
fs []fs.Fs
factory info.MachineInfoFactory
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Package container defines types for sub-container events and also
// defines an interface for container operation handlers.
package raw
}
func (f *systemdFactory) NewContainerHandler(name string, metadataEnvAllowList []string, inHostNamespace bool) (container.ContainerHandler, error) {
- return nil, fmt.Errorf("Not yet supported")
+ return nil, fmt.Errorf("not yet supported")
}
func (f *systemdFactory) CanHandleAndAccept(name string) (bool, bool, error) {
output, err := exec.Command(c.thinLsPath, args...).Output()
if err != nil {
- return nil, fmt.Errorf("Error running command `thin_ls %v`: %v\noutput:\n\n%v", strings.Join(args, " "), err, string(output))
+ return nil, fmt.Errorf("error running command `thin_ls %v`: %v\noutput:\n\n%v", strings.Join(args, " "), err, string(output))
}
return parseThinLsOutput(output), nil
import (
"fmt"
"strings"
- "sync"
+ "sync/atomic"
"time"
"k8s.io/klog/v2"
)
+// usageCache is a typed wrapper around atomic.Value that eliminates the need
+// for type assertions at every call site. It stores device ID strings mapped
+// to usage values (uint64).
+type usageCache struct {
+ v atomic.Value
+}
+
+// Load retrieves the current cache map.
+func (c *usageCache) Load() map[string]uint64 {
+ return c.v.Load().(map[string]uint64)
+}
+
+// Store saves a new cache map.
+func (c *usageCache) Store(m map[string]uint64) {
+ c.v.Store(m)
+}
+
// ThinPoolWatcher maintains a cache of device name -> usage stats for a
// devicemapper thin-pool using thin_ls.
type ThinPoolWatcher struct {
poolName string
metadataDevice string
- lock *sync.RWMutex
- cache map[string]uint64
+ cache usageCache
period time.Duration
stopChan chan struct{}
dmsetup DmsetupClient
return nil, fmt.Errorf("encountered error creating thin_ls client: %v", err)
}
- return &ThinPoolWatcher{poolName: poolName,
+ w := &ThinPoolWatcher{
+ poolName: poolName,
metadataDevice: metadataDevice,
- lock: &sync.RWMutex{},
- cache: make(map[string]uint64),
period: 15 * time.Second,
stopChan: make(chan struct{}),
dmsetup: NewDmsetupClient(),
thinLsClient: thinLsClient,
- }, nil
+ }
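+ // Seed the cache with an empty map so that Load never type-asserts a nil value.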
+ w.cache.Store(map[string]uint64{})
+ return w, nil
}
// Start starts the ThinPoolWatcher.
// GetUsage gets the cached usage value of the given device.
func (w *ThinPoolWatcher) GetUsage(deviceID string) (uint64, error) {
- w.lock.RLock()
- defer w.lock.RUnlock()
-
- v, ok := w.cache[deviceID]
+ cache := w.cache.Load()
+ v, ok := cache[deviceID]
if !ok {
return 0, fmt.Errorf("no cached value for usage of device %v", deviceID)
}
-
return v, nil
}
// Refresh performs a `thin_ls` of the pool being watched and refreshes the
// cached data with the result.
func (w *ThinPoolWatcher) Refresh() error {
- w.lock.Lock()
- defer w.lock.Unlock()
-
currentlyReserved, err := w.checkReservation(w.poolName)
if err != nil {
err = fmt.Errorf("error determining whether snapshot is reserved: %v", err)
return err
}
- w.cache = newCache
+ w.cache.Store(newCache)
return nil
}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package install
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/btrfs"
+
+ "k8s.io/klog/v2"
+)
+
+func init() {
+ err := fs.RegisterPlugin("btrfs", btrfs.NewPlugin())
+ if err != nil {
+ klog.Fatalf("Failed to register btrfs fs plugin: %v", err)
+ }
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package btrfs
+
+import (
+ "fmt"
+ "syscall"
+
+ mount "github.com/moby/sys/mountinfo"
+ "k8s.io/klog/v2"
+)
+
+// major extracts the major device number from a device number.
+func major(devNumber uint64) uint {
+ return uint((devNumber >> 8) & 0xfff)
+}
+
+// minor extracts the minor device number from a device number.
+func minor(devNumber uint64) uint {
+ return uint((devNumber & 0xff) | ((devNumber >> 12) & 0xfff00))
+}
+
+// GetBtrfsMajorMinorIds gets the major and minor device IDs for a btrfs mount point.
+// This is a workaround for wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
+// Instead of using the values from /proc/self/mountinfo, we use stat to get the Ids from the btrfs mount point.
+func GetBtrfsMajorMinorIds(mnt *mount.Info) (int, int, error) {
+ buf := new(syscall.Stat_t)
+ err := syscall.Stat(mnt.Source, buf)
+ if err != nil {
+ err = fmt.Errorf("stat failed on %s with error: %s", mnt.Source, err)
+ return 0, 0, err
+ }
+
+ klog.V(4).Infof("btrfs mount %#v", mnt)
+ if buf.Mode&syscall.S_IFMT == syscall.S_IFBLK {
+ err := syscall.Stat(mnt.Mountpoint, buf)
+ if err != nil {
+ err = fmt.Errorf("stat failed on %s with error: %s", mnt.Mountpoint, err)
+ return 0, 0, err
+ }
+
+ // The types Dev and Rdev in Stat_t are 32-bit on mips.
+ klog.V(4).Infof("btrfs dev major:minor %d:%d\n", int(major(uint64(buf.Dev))), int(minor(uint64(buf.Dev)))) // nolint: unconvert
+ klog.V(4).Infof("btrfs rdev major:minor %d:%d\n", int(major(uint64(buf.Rdev))), int(minor(uint64(buf.Rdev)))) // nolint: unconvert
+
+ return int(major(uint64(buf.Dev))), int(minor(uint64(buf.Dev))), nil // nolint: unconvert
+ }
+ return 0, 0, fmt.Errorf("%s is not a block device", mnt.Source)
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package btrfs
+
+import (
+ "strings"
+
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/vfs"
+
+ mount "github.com/moby/sys/mountinfo"
+ "k8s.io/klog/v2"
+)
+
+type btrfsPlugin struct{}
+
+// NewPlugin creates a new Btrfs filesystem plugin.
+func NewPlugin() fs.FsPlugin {
+ return &btrfsPlugin{}
+}
+
+func (p *btrfsPlugin) Name() string {
+ return "btrfs"
+}
+
+// CanHandle returns true if the filesystem type is btrfs.
+func (p *btrfsPlugin) CanHandle(fsType string) bool {
+ return fsType == "btrfs"
+}
+
+// Priority returns 100 - Btrfs has higher priority than VFS.
+func (p *btrfsPlugin) Priority() int {
+ return 100
+}
+
+// GetStats returns filesystem statistics for Btrfs.
+// Btrfs delegates to VFS for stats collection.
+func (p *btrfsPlugin) GetStats(device string, partition fs.PartitionInfo) (*fs.FsStats, error) {
+ // Btrfs uses VFS stats
+ capacity, free, avail, inodes, inodesFree, err := vfs.GetVfsStats(partition.Mountpoint)
+ if err != nil {
+ return nil, err
+ }
+
+ return &fs.FsStats{
+ Capacity: capacity,
+ Free: free,
+ Available: avail,
+ Inodes: &inodes,
+ InodesFree: &inodesFree,
+ Type: fs.VFS,
+ }, nil
+}
+
+// ProcessMount handles Btrfs mount processing.
+// Btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
+// Instead of using values from /proc/self/mountinfo we use stat to get Ids from btrfs mount point.
+func (p *btrfsPlugin) ProcessMount(mnt *mount.Info) (bool, *mount.Info, error) {
+ // Only apply fix if Major is 0 and Source starts with /dev/
+ if mnt.Major == 0 && strings.HasPrefix(mnt.Source, "/dev/") {
+ major, minor, err := GetBtrfsMajorMinorIds(mnt)
+ if err != nil {
+ klog.Warningf("%s", err)
+ } else {
+ // Create a copy with corrected values
+ correctedMnt := *mnt
+ correctedMnt.Major = major
+ correctedMnt.Minor = minor
+ return true, &correctedMnt, nil
+ }
+ }
+ return true, mnt, nil
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package install
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/devicemapper"
+
+ "k8s.io/klog/v2"
+)
+
+func init() {
+ err := fs.RegisterPlugin("devicemapper", devicemapper.NewPlugin())
+ if err != nil {
+ klog.Fatalf("Failed to register devicemapper fs plugin: %v", err)
+ }
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package devicemapper
+
+import (
+ "github.com/google/cadvisor/fs"
+
+ mount "github.com/moby/sys/mountinfo"
+ "k8s.io/klog/v2"
+)
+
+type dmPlugin struct{}
+
+// NewPlugin creates a new DeviceMapper filesystem plugin.
+func NewPlugin() fs.FsPlugin {
+ return &dmPlugin{}
+}
+
+func (p *dmPlugin) Name() string {
+ return "devicemapper"
+}
+
+// CanHandle returns true if the filesystem type is devicemapper.
+func (p *dmPlugin) CanHandle(fsType string) bool {
+ return fsType == "devicemapper"
+}
+
+// Priority returns 100 - DeviceMapper has higher priority than VFS.
+func (p *dmPlugin) Priority() int {
+ return 100
+}
+
+// GetStats returns filesystem statistics for DeviceMapper thin provisioning.
+func (p *dmPlugin) GetStats(device string, partition fs.PartitionInfo) (*fs.FsStats, error) {
+ capacity, free, avail, err := GetDMStats(device, partition.BlockSize)
+ if err != nil {
+ return nil, err
+ }
+
+ klog.V(5).Infof("got devicemapper fs capacity stats: capacity: %v free: %v available: %v", capacity, free, avail)
+
+ return &fs.FsStats{
+ Capacity: capacity,
+ Free: free,
+ Available: avail,
+ Type: fs.DeviceMapper,
+ }, nil
+}
+
+// ProcessMount handles DeviceMapper mount processing.
+// For DeviceMapper, no special processing is needed.
+func (p *dmPlugin) ProcessMount(mnt *mount.Info) (bool, *mount.Info, error) {
+ return true, mnt, nil
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package devicemapper
+
+import (
+ "fmt"
+ "os/exec"
+ "strconv"
+ "strings"
+
+ dm "github.com/google/cadvisor/devicemapper"
+)
+
+// GetDMStats returns devicemapper thin provisioning stats.
+func GetDMStats(poolName string, dataBlkSize uint) (uint64, uint64, uint64, error) {
+ out, err := exec.Command("dmsetup", "status", poolName).Output()
+ if err != nil {
+ return 0, 0, 0, err
+ }
+
+ used, total, err := parseDMStatus(string(out))
+ if err != nil {
+ return 0, 0, 0, err
+ }
+
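+ // Each data block is dataBlkSize 512-byte sectors; convert the block counts to bytes.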
+ used *= 512 * uint64(dataBlkSize)
+ total *= 512 * uint64(dataBlkSize)
+ free := total - used
+
+ return total, free, free, nil
+}
+
+// parseDMStatus parses the output of `dmsetup status`.
+func parseDMStatus(dmStatus string) (uint64, uint64, error) {
+ dmStatus = strings.Replace(dmStatus, "/", " ", -1)
+ dmFields := strings.Fields(dmStatus)
+
+ if len(dmFields) < 8 {
+ return 0, 0, fmt.Errorf("invalid dmsetup status output: %s", dmStatus)
+ }
+
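+ // After replacing "/" with spaces, fields 6 and 7 of the thin-pool status line hold the used and total data block counts.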
+ used, err := strconv.ParseUint(dmFields[6], 10, 64)
+ if err != nil {
+ return 0, 0, err
+ }
+ total, err := strconv.ParseUint(dmFields[7], 10, 64)
+ if err != nil {
+ return 0, 0, err
+ }
+
+ return used, total, nil
+}
+
+// ParseDMTable parses a single line of `dmsetup table` output and returns the
+// major device, minor device, block size, and an error.
+func ParseDMTable(dmTable string) (uint, uint, uint, error) {
+ dmTable = strings.Replace(dmTable, ":", " ", -1)
+ dmFields := strings.Fields(dmTable)
+
+ if len(dmFields) < 8 {
+ return 0, 0, 0, fmt.Errorf("invalid dmsetup status output: %s", dmTable)
+ }
+
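+ // After replacing ":" with spaces, fields 5 and 6 are the data device's major and minor numbers and field 7 is the data block size.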
+ major, err := strconv.ParseUint(dmFields[5], 10, 32)
+ if err != nil {
+ return 0, 0, 0, err
+ }
+ minor, err := strconv.ParseUint(dmFields[6], 10, 32)
+ if err != nil {
+ return 0, 0, 0, err
+ }
+ dataBlkSize, err := strconv.ParseUint(dmFields[7], 10, 32)
+ if err != nil {
+ return 0, 0, 0, err
+ }
+
+ return uint(major), uint(minor), uint(dataBlkSize), nil
+}
+
+// DockerDMDevice returns information about the devicemapper device and "partition" if
+// docker is using devicemapper for its storage driver.
+func DockerDMDevice(driverStatus map[string]string, dmsetup dm.DmsetupClient) (string, uint, uint, uint, error) {
+ const driverStatusPoolName = "Pool Name"
+
+ poolName, ok := driverStatus[driverStatusPoolName]
+ if !ok || len(poolName) == 0 {
+ return "", 0, 0, 0, fmt.Errorf("could not get dm pool name")
+ }
+
+ out, err := dmsetup.Table(poolName)
+ if err != nil {
+ return "", 0, 0, 0, err
+ }
+
+ major, minor, dataBlkSize, err := ParseDMTable(string(out))
+ if err != nil {
+ return "", 0, 0, 0, err
+ }
+
+ return poolName, major, minor, dataBlkSize, nil
+}
// limitations under the License.
//go:build linux
-// +build linux
// Provides Filesystem Stats
package fs
import (
"bufio"
- "context"
+ "errors"
"fmt"
"os"
- "os/exec"
"path"
"path/filepath"
"regexp"
"strconv"
"strings"
"syscall"
- "time"
- zfs "github.com/mistifyio/go-zfs"
mount "github.com/moby/sys/mountinfo"
"github.com/google/cadvisor/devicemapper"
- "github.com/google/cadvisor/utils"
"k8s.io/klog/v2"
)
func processMounts(mounts []*mount.Info, excludedMountpointPrefixes []string) map[string]partition {
partitions := make(map[string]partition)
- supportedFsType := map[string]bool{
- // all ext and nfs systems are checked through prefix
- // because there are a number of families (e.g., ext3, ext4, nfs3, nfs4...)
- "btrfs": true,
- "overlay": true,
- "tmpfs": true,
- "xfs": true,
- "zfs": true,
- }
-
for _, mnt := range mounts {
- if !strings.HasPrefix(mnt.FSType, "ext") && !strings.HasPrefix(mnt.FSType, "nfs") &&
- !supportedFsType[mnt.FSType] {
+ // Use plugin system to determine if filesystem is supported
+ plugin := GetPluginForFsType(mnt.FSType)
+ if plugin == nil {
continue
}
- // Avoid bind mounts, exclude tmpfs.
+
+ // Avoid bind mounts, but allow tmpfs duplicates (handled by plugin's ProcessMount)
if _, ok := partitions[mnt.Source]; ok {
if mnt.FSType != "tmpfs" {
continue
}
}
+ // Check for excluded mountpoint prefixes
hasPrefix := false
for _, prefix := range excludedMountpointPrefixes {
if strings.HasPrefix(mnt.Mountpoint, prefix) {
continue
}
- // using mountpoint to replace device once fstype it tmpfs
- if mnt.FSType == "tmpfs" {
- mnt.Source = mnt.Mountpoint
- }
- // btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
- // instead of using values from /proc/self/mountinfo we use stat to get Ids from btrfs mount point
- if mnt.FSType == "btrfs" && mnt.Major == 0 && strings.HasPrefix(mnt.Source, "/dev/") {
- major, minor, err := getBtrfsMajorMinorIds(mnt)
- if err != nil {
- klog.Warningf("%s", err)
- } else {
- mnt.Major = major
- mnt.Minor = minor
- }
+ // Let plugin process the mount (handles filesystem-specific modifications)
+ include, processedMnt, err := plugin.ProcessMount(mnt)
+ if err != nil {
+ klog.Warningf("error processing mount for %s: %v", mnt.FSType, err)
+ continue
}
-
- // overlay fix: Making mount source unique for all overlay mounts, using the mount's major and minor ids.
- if mnt.FSType == "overlay" {
- mnt.Source = fmt.Sprintf("%s_%d-%d", mnt.Source, mnt.Major, mnt.Minor)
+ if !include {
+ continue
}
- partitions[mnt.Source] = partition{
- fsType: mnt.FSType,
- mountpoint: mnt.Mountpoint,
- major: uint(mnt.Major),
- minor: uint(mnt.Minor),
+ partitions[processedMnt.Source] = partition{
+ fsType: processedMnt.FSType,
+ mountpoint: processedMnt.Mountpoint,
+ major: uint(processedMnt.Major),
+ minor: uint(processedMnt.Minor),
}
}
if err != nil {
return nil, err
}
- nfsInfo := make(map[string]Fs, 0)
+ // statsCache stores cached filesystem stats by cache key for plugins that implement FsCachingPlugin
+ statsCache := make(map[string]Fs)
for device, partition := range i.partitions {
_, hasMount := mountSet[partition.mountpoint]
_, hasDevice := deviceSet[device]
if mountSet == nil || (hasMount && !hasDevice) {
var (
- err error
- fs Fs
+ statsErr error
+ fs Fs
)
- fsType := partition.fsType
- if strings.HasPrefix(partition.fsType, "nfs") {
- fsType = "nfs"
+
+ // Use plugin system to get filesystem stats
+ plugin := GetPluginForFsType(partition.fsType)
+ if plugin == nil {
+ klog.V(4).Infof("no plugin found for filesystem type: %v", partition.fsType)
+ continue
}
- switch fsType {
- case DeviceMapper.String():
- fs.Capacity, fs.Free, fs.Available, err = getDMStats(device, partition.blockSize)
- klog.V(5).Infof("got devicemapper fs capacity stats: capacity: %v free: %v available: %v:", fs.Capacity, fs.Free, fs.Available)
- fs.Type = DeviceMapper
- case ZFS.String():
- if _, devzfs := os.Stat("/dev/zfs"); os.IsExist(devzfs) {
- fs.Capacity, fs.Free, fs.Available, err = getZfstats(device)
- fs.Type = ZFS
- break
- }
- // if /dev/zfs is not present default to VFS
- fallthrough
- case NFS.String():
- devId := fmt.Sprintf("%d:%d", partition.major, partition.minor)
- if v, ok := nfsInfo[devId]; ok {
- fs = v
- break
+
+ partInfo := PartitionInfo{
+ Mountpoint: partition.mountpoint,
+ Major: partition.major,
+ Minor: partition.minor,
+ FsType: partition.fsType,
+ BlockSize: partition.blockSize,
+ }
+
+ // Check if plugin supports caching and if we have a cached value
+ var cacheKey string
+ if cachingPlugin, ok := plugin.(FsCachingPlugin); ok {
+ cacheKey = cachingPlugin.CacheKey(partInfo)
+ if cacheKey != "" {
+ if cachedFs, found := statsCache[cacheKey]; found {
+ fs = cachedFs
+ // Skip stats fetching, use cached value
+ deviceSet[device] = struct{}{}
+ fs.DeviceInfo = DeviceInfo{
+ Device: device,
+ Major: uint(partition.major),
+ Minor: uint(partition.minor),
+ }
+ if val, ok := diskStatsMap[device]; ok {
+ fs.DiskStats = val
+ } else {
+ for k, v := range diskStatsMap {
+ if v.MajorNum == uint64(partition.major) && v.MinorNum == uint64(partition.minor) {
+ fs.DiskStats = diskStatsMap[k]
+ break
+ }
+ }
+ }
+ filesystems = append(filesystems, fs)
+ continue
+ }
}
- var inodes, inodesFree uint64
- fs.Capacity, fs.Free, fs.Available, inodes, inodesFree, err = getVfsStats(partition.mountpoint)
- if err != nil {
- klog.V(4).Infof("the file system type is %s, partition mountpoint does not exist: %v, error: %v", partition.fsType, partition.mountpoint, err)
- break
+ }
+
+ stats, statsErr := plugin.GetStats(device, partInfo)
+ if statsErr != nil {
+ // Handle fallback to VFS for plugins that request it
+ if errors.Is(statsErr, ErrFallbackToVFS) {
+ vfsPlugin := GetPluginForFsType("ext4") // VFS handles ext*
+ if vfsPlugin != nil {
+ stats, statsErr = vfsPlugin.GetStats(device, partInfo)
+ }
}
- fs.Inodes = &inodes
- fs.InodesFree = &inodesFree
- fs.Type = VFS
- nfsInfo[devId] = fs
- default:
- var inodes, inodesFree uint64
- if utils.FileExists(partition.mountpoint) {
- fs.Capacity, fs.Free, fs.Available, inodes, inodesFree, err = getVfsStats(partition.mountpoint)
- fs.Inodes = &inodes
- fs.InodesFree = &inodesFree
- fs.Type = VFS
- } else {
- klog.V(4).Infof("unable to determine file system type, partition mountpoint does not exist: %v", partition.mountpoint)
+ if statsErr != nil {
+ klog.V(4).Infof("Stat fs failed for %s. Error: %v", partition.fsType, statsErr)
+ continue
}
}
- if err != nil {
- klog.V(4).Infof("Stat fs failed. Error: %v", err)
- } else {
- deviceSet[device] = struct{}{}
- fs.DeviceInfo = DeviceInfo{
- Device: device,
- Major: uint(partition.major),
- Minor: uint(partition.minor),
- }
- if val, ok := diskStatsMap[device]; ok {
- fs.DiskStats = val
- } else {
- for k, v := range diskStatsMap {
- if v.MajorNum == uint64(partition.major) && v.MinorNum == uint64(partition.minor) {
- fs.DiskStats = diskStatsMap[k]
- break
- }
+ if stats == nil {
+ klog.V(4).Infof("no stats returned for %s at %s", partition.fsType, partition.mountpoint)
+ continue
+ }
+
+ fs.Capacity = stats.Capacity
+ fs.Free = stats.Free
+ fs.Available = stats.Available
+ fs.Inodes = stats.Inodes
+ fs.InodesFree = stats.InodesFree
+ fs.Type = stats.Type
+
+ // Store in cache if plugin supports caching
+ if cacheKey != "" {
+ statsCache[cacheKey] = fs
+ }
+
+ deviceSet[device] = struct{}{}
+ fs.DeviceInfo = DeviceInfo{
+ Device: device,
+ Major: uint(partition.major),
+ Minor: uint(partition.minor),
+ }
+
+ if val, ok := diskStatsMap[device]; ok {
+ fs.DiskStats = val
+ } else {
+ for k, v := range diskStatsMap {
+ if v.MajorNum == uint64(partition.major) && v.MinorNum == uint64(partition.minor) {
+ fs.DiskStats = diskStatsMap[k]
+ break
}
}
- filesystems = append(filesystems, fs)
}
+ filesystems = append(filesystems, fs)
}
}
return filesystems, nil
return GetDirUsage(dir)
}
-func getVfsStats(path string) (total uint64, free uint64, avail uint64, inodes uint64, inodesFree uint64, err error) {
- // timeout the context with, default is 2sec
- timeout := 2
- ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeout)*time.Second)
- defer cancel()
-
- type result struct {
- total uint64
- free uint64
- avail uint64
- inodes uint64
- inodesFree uint64
- err error
- }
-
- resultChan := make(chan result, 1)
-
- go func() {
- var s syscall.Statfs_t
- if err = syscall.Statfs(path, &s); err != nil {
- total, free, avail, inodes, inodesFree = 0, 0, 0, 0, 0
- }
- total = uint64(s.Frsize) * s.Blocks
- free = uint64(s.Frsize) * s.Bfree
- avail = uint64(s.Frsize) * s.Bavail
- inodes = uint64(s.Files)
- inodesFree = uint64(s.Ffree)
- resultChan <- result{total: total, free: free, avail: avail, inodes: inodes, inodesFree: inodesFree, err: err}
- }()
-
- select {
- case <-ctx.Done():
- return 0, 0, 0, 0, 0, ctx.Err()
- case res := <-resultChan:
- return res.total, res.free, res.avail, res.inodes, res.inodesFree, res.err
- }
-}
-
// Devicemapper thin provisioning is detailed at
// https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt
func dockerDMDevice(driverStatus map[string]string, dmsetup devicemapper.DmsetupClient) (string, uint, uint, uint, error) {
poolName, ok := driverStatus[DriverStatusPoolName]
if !ok || len(poolName) == 0 {
- return "", 0, 0, 0, fmt.Errorf("Could not get dm pool name")
+ return "", 0, 0, 0, fmt.Errorf("could not get dm pool name")
}
out, err := dmsetup.Table(poolName)
dmFields := strings.Fields(dmTable)
if len(dmFields) < 8 {
- return 0, 0, 0, fmt.Errorf("Invalid dmsetup status output: %s", dmTable)
+ return 0, 0, 0, fmt.Errorf("invalid dmsetup status output: %s", dmTable)
}
major, err := strconv.ParseUint(dmFields[5], 10, 32)
return uint(major), uint(minor), uint(dataBlkSize), nil
}
-func getDMStats(poolName string, dataBlkSize uint) (uint64, uint64, uint64, error) {
- out, err := exec.Command("dmsetup", "status", poolName).Output()
- if err != nil {
- return 0, 0, 0, err
- }
-
- used, total, err := parseDMStatus(string(out))
- if err != nil {
- return 0, 0, 0, err
- }
-
- used *= 512 * uint64(dataBlkSize)
- total *= 512 * uint64(dataBlkSize)
- free := total - used
-
- return total, free, free, nil
-}
-
-func parseDMStatus(dmStatus string) (uint64, uint64, error) {
- dmStatus = strings.Replace(dmStatus, "/", " ", -1)
- dmFields := strings.Fields(dmStatus)
-
- if len(dmFields) < 8 {
- return 0, 0, fmt.Errorf("Invalid dmsetup status output: %s", dmStatus)
- }
-
- used, err := strconv.ParseUint(dmFields[6], 10, 64)
- if err != nil {
- return 0, 0, err
- }
- total, err := strconv.ParseUint(dmFields[7], 10, 64)
- if err != nil {
- return 0, 0, err
- }
-
- return used, total, nil
-}
-
-// getZfstats returns ZFS mount stats using zfsutils
-func getZfstats(poolName string) (uint64, uint64, uint64, error) {
- dataset, err := zfs.GetDataset(poolName)
- if err != nil {
- return 0, 0, 0, err
- }
-
- total := dataset.Used + dataset.Avail + dataset.Usedbydataset
-
- return total, dataset.Avail, dataset.Avail, nil
-}
-
// Get major and minor Ids for a mount point using btrfs as filesystem.
func getBtrfsMajorMinorIds(mount *mount.Info) (int, int, error) {
// btrfs fix: following workaround fixes wrong btrfs Major and Minor Ids reported in /proc/self/mountinfo.
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package install
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/nfs"
+
+ "k8s.io/klog/v2"
+)
+
+func init() {
+ err := fs.RegisterPlugin("nfs", nfs.NewPlugin())
+ if err != nil {
+ klog.Fatalf("Failed to register nfs fs plugin: %v", err)
+ }
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package nfs
+
+import (
+ "fmt"
+ "strings"
+
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/vfs"
+
+ mount "github.com/moby/sys/mountinfo"
+ "k8s.io/klog/v2"
+)
+
+type nfsPlugin struct{}
+
+// Ensure nfsPlugin implements FsCachingPlugin
+var _ fs.FsCachingPlugin = &nfsPlugin{}
+
+// NewPlugin creates a new NFS filesystem plugin.
+func NewPlugin() fs.FsPlugin {
+ return &nfsPlugin{}
+}
+
+func (p *nfsPlugin) Name() string {
+ return "nfs"
+}
+
+// CanHandle returns true if the filesystem type is NFS (nfs, nfs3, nfs4, etc.).
+func (p *nfsPlugin) CanHandle(fsType string) bool {
+ return strings.HasPrefix(fsType, "nfs")
+}
+
+// Priority returns 50 - NFS has medium priority (higher than VFS but lower than specific plugins).
+func (p *nfsPlugin) Priority() int {
+ return 50
+}
+
+// GetStats returns filesystem statistics for NFS.
+// NFS uses VFS stats.
+func (p *nfsPlugin) GetStats(device string, partition fs.PartitionInfo) (*fs.FsStats, error) {
+ capacity, free, avail, inodes, inodesFree, err := vfs.GetVfsStats(partition.Mountpoint)
+ if err != nil {
+ klog.V(4).Infof("the file system type is %s, partition mountpoint does not exist: %v, error: %v",
+ partition.FsType, partition.Mountpoint, err)
+ return nil, err
+ }
+
+ return &fs.FsStats{
+ Capacity: capacity,
+ Free: free,
+ Available: avail,
+ Inodes: &inodes,
+ InodesFree: &inodesFree,
+ Type: fs.VFS,
+ }, nil
+}
+
+// ProcessMount handles NFS mount processing.
+// For NFS, no special processing is needed.
+func (p *nfsPlugin) ProcessMount(mnt *mount.Info) (bool, *mount.Info, error) {
+ return true, mnt, nil
+}
+
+// CacheKey returns a cache key based on device ID (major:minor).
+// NFS mounts with the same device ID share the same underlying filesystem,
+// so we can cache stats to avoid redundant statfs calls.
+func (p *nfsPlugin) CacheKey(partition fs.PartitionInfo) string {
+ return fmt.Sprintf("%d:%d", partition.Major, partition.Minor)
+}
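+
+// For illustration only (hypothetical device numbers): two mounts of the same
+// NFS export report the same major:minor pair, so they share one cache entry
+// and statfs is issued only once for that export:
+//
+//	p := NewPlugin().(fs.FsCachingPlugin)
+//	key := p.CacheKey(fs.PartitionInfo{Major: 0, Minor: 45}) // "0:45"
+//	// a second mount with Major 0, Minor 45 yields the same key and hits the cache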
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package install
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/overlay"
+
+ "k8s.io/klog/v2"
+)
+
+func init() {
+ err := fs.RegisterPlugin("overlay", overlay.NewPlugin())
+ if err != nil {
+ klog.Fatalf("Failed to register overlay fs plugin: %v", err)
+ }
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package overlay
+
+import (
+ "fmt"
+
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/vfs"
+
+ mount "github.com/moby/sys/mountinfo"
+)
+
+type overlayPlugin struct{}
+
+// NewPlugin creates a new Overlay filesystem plugin.
+func NewPlugin() fs.FsPlugin {
+ return &overlayPlugin{}
+}
+
+func (p *overlayPlugin) Name() string {
+ return "overlay"
+}
+
+// CanHandle returns true if the filesystem type is overlay.
+func (p *overlayPlugin) CanHandle(fsType string) bool {
+ return fsType == "overlay"
+}
+
+// Priority returns 100 - Overlay has higher priority than VFS.
+func (p *overlayPlugin) Priority() int {
+ return 100
+}
+
+// GetStats returns filesystem statistics for Overlay.
+// Overlay delegates to VFS for stats collection.
+func (p *overlayPlugin) GetStats(device string, partition fs.PartitionInfo) (*fs.FsStats, error) {
+ // Overlay uses VFS stats
+ capacity, free, avail, inodes, inodesFree, err := vfs.GetVfsStats(partition.Mountpoint)
+ if err != nil {
+ return nil, err
+ }
+
+ return &fs.FsStats{
+ Capacity: capacity,
+ Free: free,
+ Available: avail,
+ Inodes: &inodes,
+ InodesFree: &inodesFree,
+ Type: fs.VFS,
+ }, nil
+}
+
+// ProcessMount handles Overlay mount processing.
+// It makes the mount source unique for each overlay mount by appending the mount's major and minor IDs.
+// This is needed because multiple overlay mounts can have the same source.
+func (p *overlayPlugin) ProcessMount(mnt *mount.Info) (bool, *mount.Info, error) {
+ // Create a copy with unique source
+ correctedMnt := *mnt
+ correctedMnt.Source = fmt.Sprintf("%s_%d-%d", mnt.Source, mnt.Major, mnt.Minor)
+ return true, &correctedMnt, nil
+}
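+
+// For illustration only (hypothetical mount values): an overlay mount with
+// Source "overlay", Major 0 and Minor 123 is rewritten to the unique source
+// "overlay_0-123", so multiple overlay mounts no longer collide in the
+// partition map:
+//
+//	_, m, _ := (&overlayPlugin{}).ProcessMount(&mount.Info{Source: "overlay", Major: 0, Minor: 123})
+//	// m.Source == "overlay_0-123"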
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package fs
+
+import (
+ "errors"
+ "fmt"
+ "sync"
+
+ mount "github.com/moby/sys/mountinfo"
+ "k8s.io/klog/v2"
+)
+
+// FsPlugin provides filesystem-specific statistics collection.
+type FsPlugin interface {
+ // Name returns the plugin identifier (e.g., "zfs", "devicemapper", "vfs").
+ Name() string
+
+ // CanHandle returns true if this plugin handles the given filesystem type.
+ CanHandle(fsType string) bool
+
+ // Priority returns the plugin priority (higher = checked first).
+ // Allows specific plugins (zfs, btrfs) to override generic (vfs).
+ Priority() int
+
+ // GetStats returns filesystem statistics for a partition.
+ GetStats(device string, partition PartitionInfo) (*FsStats, error)
+
+ // ProcessMount optionally modifies mount info during processing.
+ // Returns (shouldInclude bool, modifiedMount *mount.Info, error).
+ ProcessMount(mnt *mount.Info) (bool, *mount.Info, error)
+}
+
+// FsCachingPlugin is an optional interface for plugins that want to cache
+// stats by a key (e.g., device ID) to avoid redundant stat calls.
+// This is useful for network filesystems like NFS where multiple mounts
+// may point to the same underlying device.
+type FsCachingPlugin interface {
+ FsPlugin
+
+ // CacheKey returns a cache key for the given partition.
+ // Stats will be cached by this key and reused for partitions with the same key.
+ // Return empty string to disable caching for a specific partition.
+ CacheKey(partition PartitionInfo) string
+}
+
+// FsWatcherPlugin is an optional interface for plugins that provide
+// background monitoring (e.g., ZFS watcher, ThinPool watcher).
+type FsWatcherPlugin interface {
+ FsPlugin
+
+ // StartWatcher starts background monitoring.
+ // Returns a Watcher that can be used to get container-level usage.
+ StartWatcher() (FsWatcher, error)
+}
+
+// FsWatcher provides container-level filesystem usage from background monitoring.
+type FsWatcher interface {
+ // GetUsage returns filesystem usage for a specific container/path.
+ GetUsage(containerID string, deviceID string) (uint64, error)
+
+ // Stop stops the background monitoring.
+ Stop()
+}
+
+// PartitionInfo contains information needed for stats collection.
+type PartitionInfo struct {
+ Mountpoint string
+ Major uint
+ Minor uint
+ FsType string
+ BlockSize uint
+}
+
+// FsStats contains filesystem statistics returned by plugins.
+type FsStats struct {
+ Capacity uint64
+ Free uint64
+ Available uint64
+ Inodes *uint64
+ InodesFree *uint64
+ Type FsType
+}
+
+// ErrFallbackToVFS signals that a specialized plugin cannot handle
+// this filesystem and VFS should be used instead.
+var ErrFallbackToVFS = errors.New("fallback to VFS")
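+
+// A specialized plugin returns ErrFallbackToVFS from GetStats when its backend
+// is unavailable, and the caller retries with the generic VFS plugin. Minimal
+// sketch (hypothetical zfsPlugin; assumes /dev/zfs gates availability):
+//
+//	func (p *zfsPlugin) GetStats(device string, partition PartitionInfo) (*FsStats, error) {
+//		if _, err := os.Stat("/dev/zfs"); err != nil {
+//			return nil, ErrFallbackToVFS
+//		}
+//		return p.zfsStats(device) // hypothetical helper
+//	}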
+
+// Plugin registry (init-time registration only).
+var (
+ pluginsLock sync.RWMutex
+ plugins = make(map[string]FsPlugin)
+)
+
+// RegisterPlugin registers a filesystem plugin.
+// This should be called from init() functions.
+func RegisterPlugin(name string, plugin FsPlugin) error {
+ pluginsLock.Lock()
+ defer pluginsLock.Unlock()
+ if _, found := plugins[name]; found {
+ return fmt.Errorf("FsPlugin %q was registered twice", name)
+ }
+ klog.V(4).Infof("Registered FsPlugin %q", name)
+ plugins[name] = plugin
+ return nil
+}
+
+// GetPluginForFsType returns the appropriate plugin for the filesystem type.
+// Returns nil if no plugin can handle the filesystem type.
+func GetPluginForFsType(fsType string) FsPlugin {
+ pluginsLock.RLock()
+ defer pluginsLock.RUnlock()
+
+ var best FsPlugin
+ for _, p := range plugins {
+ if p.CanHandle(fsType) {
+ if best == nil || p.Priority() > best.Priority() {
+ best = p
+ }
+ }
+ }
+ return best
+}
+
+// GetAllPlugins returns all registered plugins.
+func GetAllPlugins() []FsPlugin {
+ pluginsLock.RLock()
+ defer pluginsLock.RUnlock()
+
+ result := make([]FsPlugin, 0, len(plugins))
+ for _, p := range plugins {
+ result = append(result, p)
+ }
+ return result
+}
+
+// InitializeWatchers starts all plugin watchers and returns them.
+func InitializeWatchers() map[string]FsWatcher {
+ pluginsLock.RLock()
+ defer pluginsLock.RUnlock()
+
+ watchers := make(map[string]FsWatcher)
+ for name, plugin := range plugins {
+ if wp, ok := plugin.(FsWatcherPlugin); ok {
+ watcher, err := wp.StartWatcher()
+ if err != nil {
+ klog.V(4).Infof("Failed to start watcher for plugin %s: %v", name, err)
+ continue
+ }
+ if watcher != nil {
+ watchers[name] = watcher
+ klog.V(4).Infof("Started watcher for FsPlugin %q", name)
+ }
+ }
+ }
+ return watchers
+}
+
+// StopWatchers stops all provided watchers.
+func StopWatchers(watchers map[string]FsWatcher) {
+ for name, watcher := range watchers {
+ if watcher != nil {
+ watcher.Stop()
+ klog.V(4).Infof("Stopped watcher for FsPlugin %q", name)
+ }
+ }
+}
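+
+// Putting the registry together: a filesystem package registers its plugin from
+// an install package's init(), and the fs package later selects the
+// highest-priority plugin for a mount's filesystem type. Minimal sketch
+// (hypothetical "foofs" plugin, not part of this change):
+//
+//	// in an install package:
+//	func init() {
+//		if err := fs.RegisterPlugin("foofs", foofs.NewPlugin()); err != nil {
+//			klog.Fatalf("Failed to register foofs fs plugin: %v", err)
+//		}
+//	}
+//
+//	// during partition processing (e.g. in GetFsInfoForPath):
+//	if plugin := GetPluginForFsType(partition.fsType); plugin != nil {
+//		stats, err := plugin.GetStats(device, partInfo)
+//		// handle stats / err
+//	}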
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package install
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/tmpfs"
+
+ "k8s.io/klog/v2"
+)
+
+func init() {
+ err := fs.RegisterPlugin("tmpfs", tmpfs.NewPlugin())
+ if err != nil {
+ klog.Fatalf("Failed to register tmpfs fs plugin: %v", err)
+ }
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package tmpfs
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/vfs"
+
+ mount "github.com/moby/sys/mountinfo"
+)
+
+type tmpfsPlugin struct{}
+
+// NewPlugin creates a new tmpfs filesystem plugin.
+func NewPlugin() fs.FsPlugin {
+ return &tmpfsPlugin{}
+}
+
+func (p *tmpfsPlugin) Name() string {
+ return "tmpfs"
+}
+
+// CanHandle returns true if the filesystem type is tmpfs.
+func (p *tmpfsPlugin) CanHandle(fsType string) bool {
+ return fsType == "tmpfs"
+}
+
+// Priority returns 100 - tmpfs has higher priority than VFS.
+func (p *tmpfsPlugin) Priority() int {
+ return 100
+}
+
+// GetStats returns filesystem statistics for tmpfs.
+// tmpfs delegates to VFS for stats collection.
+func (p *tmpfsPlugin) GetStats(device string, partition fs.PartitionInfo) (*fs.FsStats, error) {
+ // tmpfs uses VFS stats
+ capacity, free, avail, inodes, inodesFree, err := vfs.GetVfsStats(partition.Mountpoint)
+ if err != nil {
+ return nil, err
+ }
+
+ return &fs.FsStats{
+ Capacity: capacity,
+ Free: free,
+ Available: avail,
+ Inodes: &inodes,
+ InodesFree: &inodesFree,
+ Type: fs.VFS,
+ }, nil
+}
+
+// ProcessMount handles tmpfs mount processing.
+// For tmpfs, we use the mountpoint as the source to make each mount unique.
+// This allows multiple tmpfs mounts with the same "tmpfs" source to coexist.
+func (p *tmpfsPlugin) ProcessMount(mnt *mount.Info) (bool, *mount.Info, error) {
+ // Use mountpoint as source to make each tmpfs mount unique
+ correctedMnt := *mnt
+ correctedMnt.Source = mnt.Mountpoint
+ return true, &correctedMnt, nil
+}
+
+// AllowDuplicateSource returns true for tmpfs since multiple tmpfs mounts
+// should be tracked separately even if they appear to have the same source.
+func (p *tmpfsPlugin) AllowDuplicateSource() bool {
+ return true
+}
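+
+// For illustration only (hypothetical mounts): /dev/shm and /run both report the
+// source "tmpfs"; after ProcessMount they are keyed by their mountpoints instead,
+// so each appears as a separate partition:
+//
+//	_, m, _ := (&tmpfsPlugin{}).ProcessMount(&mount.Info{Source: "tmpfs", Mountpoint: "/run"})
+//	// m.Source == "/run"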
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package install
+
+import (
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/fs/vfs"
+
+ "k8s.io/klog/v2"
+)
+
+func init() {
+ err := fs.RegisterPlugin("vfs", vfs.NewPlugin())
+ if err != nil {
+ klog.Fatalf("Failed to register vfs fs plugin: %v", err)
+ }
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package vfs
+
+import (
+ "strings"
+
+ "github.com/google/cadvisor/fs"
+ "github.com/google/cadvisor/utils"
+
+ mount "github.com/moby/sys/mountinfo"
+ "k8s.io/klog/v2"
+)
+
+type vfsPlugin struct{}
+
+// NewPlugin creates a new VFS filesystem plugin.
+func NewPlugin() fs.FsPlugin {
+ return &vfsPlugin{}
+}
+
+func (p *vfsPlugin) Name() string {
+ return "vfs"
+}
+
+// CanHandle returns true for standard filesystems that use VFS stats.
+// This includes ext2/3/4, xfs, and similar block-based filesystems.
+// Virtual/pseudo filesystems (proc, sysfs, cgroup, etc.) are excluded.
+func (p *vfsPlugin) CanHandle(fsType string) bool {
+ // Exclude virtual/pseudo filesystems that don't have real disk backing
+ switch fsType {
+ case "cgroup", "cgroup2", "cpuset", "mqueue", "proc", "sysfs",
+ "devtmpfs", "devpts", "securityfs", "debugfs", "tracefs",
+ "pstore", "configfs", "fusectl", "hugetlbfs", "autofs",
+ "binfmt_misc", "efivarfs", "rpc_pipefs", "nsfs":
+ return false
+ }
+
+ // VFS can handle most standard Linux filesystems
+ if strings.HasPrefix(fsType, "ext") {
+ return true
+ }
+ switch fsType {
+ case "xfs", "squashfs", "f2fs", "jfs", "reiserfs", "hfs", "hfsplus",
+ "ntfs", "vfat", "fat", "msdos", "exfat", "udf", "iso9660":
+ return true
+ }
+ // Don't act as a general fallback - only handle known filesystem types
+ return false
+}
+
+// Priority returns 0 - VFS is the lowest priority fallback plugin.
+func (p *vfsPlugin) Priority() int {
+ return 0
+}
+
+// GetStats returns filesystem statistics using the statfs syscall.
+func (p *vfsPlugin) GetStats(device string, partition fs.PartitionInfo) (*fs.FsStats, error) {
+ if !utils.FileExists(partition.Mountpoint) {
+ klog.V(4).Infof("VFS: mountpoint does not exist: %v", partition.Mountpoint)
+ return nil, nil
+ }
+
+ capacity, free, avail, inodes, inodesFree, err := GetVfsStats(partition.Mountpoint)
+ if err != nil {
+ return nil, err
+ }
+
+ return &fs.FsStats{
+ Capacity: capacity,
+ Free: free,
+ Available: avail,
+ Inodes: &inodes,
+ InodesFree: &inodesFree,
+ Type: fs.VFS,
+ }, nil
+}
+
+// ProcessMount handles standard mount processing.
+// For VFS, no special processing is needed.
+func (p *vfsPlugin) ProcessMount(mnt *mount.Info) (bool, *mount.Info, error) {
+ return true, mnt, nil
+}
--- /dev/null
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package vfs
+
+import (
+ "context"
+ "syscall"
+ "time"
+)
+
+// GetVfsStats returns filesystem statistics using the statfs syscall.
+// It has a timeout to prevent hanging on unresponsive filesystems.
+func GetVfsStats(path string) (total uint64, free uint64, avail uint64, inodes uint64, inodesFree uint64, err error) {
+ // Bound the statfs call to a 2 second timeout so a hung filesystem
+ // (for example a stale NFS mount) cannot block the caller indefinitely.
+ timeout := 2
+ ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeout)*time.Second)
+ defer cancel()
+
+ type result struct {
+ total uint64
+ free uint64
+ avail uint64
+ inodes uint64
+ inodesFree uint64
+ err error
+ }
+
+ resultChan := make(chan result, 1)
+
+ go func() {
+ // Use locals rather than the named return values so the goroutine does
+ // not race with the timeout path below.
+ var s syscall.Statfs_t
+ if statErr := syscall.Statfs(path, &s); statErr != nil {
+ resultChan <- result{err: statErr}
+ return
+ }
+ resultChan <- result{
+ total: uint64(s.Frsize) * s.Blocks,
+ free: uint64(s.Frsize) * s.Bfree,
+ avail: uint64(s.Frsize) * s.Bavail,
+ inodes: uint64(s.Files),
+ inodesFree: uint64(s.Ffree),
+ }
+ }()
+
+ select {
+ case <-ctx.Done():
+ return 0, 0, 0, 0, 0, ctx.Err()
+ case res := <-resultChan:
+ return res.total, res.free, res.avail, res.inodes, res.inodesFree, res.err
+ }
+}
// Total time duration for which tasks in the cgroup have been throttled.
// Unit: nanoseconds.
ThrottledTime uint64 `json:"throttled_time"`
+
+ // Total number of periods when CPU burst occurs.
+ BurstsPeriods uint64 `json:"bursts_periods"`
+
+ // Total time duration when CPU burst occurs.
+ // Unit: nanoseconds.
+ BurstTime uint64 `json:"burst_time"`
}
// Cpu Aggregated scheduler statistics
IoWaitTime []PerDiskStats `json:"io_wait_time,omitempty"`
IoMerged []PerDiskStats `json:"io_merged,omitempty"`
IoTime []PerDiskStats `json:"io_time,omitempty"`
+ IoCostUsage []PerDiskStats `json:"io_cost_usage,omitempty"`
+ IoCostWait []PerDiskStats `json:"io_cost_wait,omitempty"`
+ IoCostIndebt []PerDiskStats `json:"io_cost_indebt,omitempty"`
+ IoCostIndelay []PerDiskStats `json:"io_cost_indelay,omitempty"`
PSI PSIStats `json:"psi"`
}
Ulimits []UlimitSpec `json:"ulimits,omitempty"`
}
+type Health struct {
+ // Health status of the container
+ Status string `json:"status"`
+}
+
type ContainerStats struct {
// The time of this stat point.
Timestamp time.Time `json:"timestamp"`
CpuSet CPUSetStats `json:"cpuset,omitempty"`
OOMEvents uint64 `json:"oom_events,omitempty"`
+
+ Health Health `json:"health,omitempty"`
}
func timeEq(t1, t2 time.Time, tolerance time.Duration) bool {
type EventData struct {
// Information about an OOM kill event.
OomKill *OomKillEventData `json:"oom,omitempty"`
+
+ // Information about a container deletion event.
+ ContainerDeletion *ContainerDeletionEventData `json:"container_deletion,omitempty"`
}
// Information related to an OOM kill instance
// The name of the killed process
ProcessName string `json:"process_name"`
}
+
+// Information related to a container deletion event
+type ContainerDeletionEventData struct {
+ // ExitCode is the exit code of the container.
+ // A value of -1 indicates the exit code was not available or not applicable.
+ ExitCode int `json:"exit_code"`
+}
Ninety uint64 `json:"ninety"`
// 95th percentile over the collected sample.
NinetyFive uint64 `json:"ninetyfive"`
+ // Number of samples used to calculate these percentiles.
+ Count uint64 `json:"count"`
}
type Usage struct {
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package machine
import (
- "bytes"
"flag"
"os"
"path/filepath"
func KernelVersion() string {
uname := &unix.Utsname{}
-
if err := unix.Uname(uname); err != nil {
return "Unknown"
}
-
- return string(uname.Release[:bytes.IndexByte(uname.Release[:], 0)])
+ return unix.ByteSliceToString(uname.Release[:])
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// The machine package contains functions that extract machine-level specs.
package machine
klog.Errorf("Cannot get machine architecture, err: %v", err)
return ""
}
- return string(uname.Machine[:])
+ return unix.ByteSliceToString(uname.Machine[:])
}
// arm32 changes
// limitations under the License.
//go:build freebsd || darwin || linux
-// +build freebsd darwin linux
package machine
import (
"fmt"
"os"
- "os/exec"
"regexp"
"runtime"
"strings"
+
+ "golang.org/x/sys/unix"
)
var rex = regexp.MustCompile("(PRETTY_NAME)=(.*)")
// getOperatingSystem gets the name of the current operating system.
func getOperatingSystem() (string, error) {
if runtime.GOOS == "darwin" || runtime.GOOS == "freebsd" {
- cmd := exec.Command("uname", "-s")
- osName, err := cmd.Output()
+ uname := unix.Utsname{}
+ err := unix.Uname(&uname)
if err != nil {
return "", err
}
- return string(osName), nil
+ return unix.ByteSliceToString(uname.Sysname[:]), nil
}
bytes, err := os.ReadFile("/etc/os-release")
if err != nil && os.IsNotExist(err) {
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package manager
import (
Spec info.ContainerSpec
}
+// atomicTime is a lock-free wrapper for storing and retrieving time values.
+// It stores time as Unix nanoseconds in an atomic.Int64, enabling concurrent
+// reads and writes without mutex contention.
+type atomicTime struct {
+ atomic.Int64
+}
+
+// Time returns the stored time value as a time.Time.
+func (t *atomicTime) Time() time.Time {
+ return time.Unix(0, t.Load())
+}
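+
+// For illustration, housekeeping goroutines can record a timestamp without
+// taking cd.lock, and readers convert it back on demand:
+//
+//	cd.statsLastUpdatedTime.Store(cd.clock.Now().UnixNano())
+//	age := cd.clock.Since(cd.statsLastUpdatedTime.Time())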
+
type containerData struct {
oomEvents uint64
handler container.ContainerHandler
housekeepingInterval time.Duration
maxHousekeepingInterval time.Duration
allowDynamicHousekeeping bool
- infoLastUpdatedTime time.Time
- statsLastUpdatedTime time.Time
+ infoLastUpdatedTime atomicTime // Unix nano
+ statsLastUpdatedTime atomicTime // Unix nano
lastErrorTime time.Time
// used to track time
clock clock.Clock
logUsage bool
// Tells the container to stop.
- stop chan struct{}
+ stop chan struct{}
+ stopOnce sync.Once
// Tells the container to immediately collect stats
onDemandChan chan chan struct{}
if err != nil {
return err
}
- close(cd.stop)
+ // Use sync.Once to ensure the channel is only closed once, preventing
+ // panic from concurrent calls to Stop() when multiple goroutines try
+ // to destroy the same container simultaneously.
+ cd.stopOnce.Do(func() {
+ close(cd.stop)
+ })
cd.perfCollector.Destroy()
cd.resctrlCollector.Destroy()
return nil
// periodic housekeeping to reset. This should be used sparingly, as calling OnDemandHousekeeping frequently
// can have serious performance costs.
func (cd *containerData) OnDemandHousekeeping(maxAge time.Duration) {
- cd.lock.Lock()
- timeSinceStatsLastUpdate := cd.clock.Since(cd.statsLastUpdatedTime)
- cd.lock.Unlock()
+ timeSinceStatsLastUpdate := cd.clock.Since(cd.statsLastUpdatedTime.Time())
if timeSinceStatsLastUpdate > maxAge {
housekeepingFinishedChan := make(chan struct{})
cd.onDemandChan <- housekeepingFinishedChan
func (cd *containerData) GetInfo(shouldUpdateSubcontainers bool) (*containerInfo, error) {
// Get spec and subcontainers.
- if cd.clock.Since(cd.infoLastUpdatedTime) > 5*time.Second || shouldUpdateSubcontainers {
+ if cd.clock.Since(cd.infoLastUpdatedTime.Time()) > 5*time.Second || shouldUpdateSubcontainers {
err := cd.updateSpec()
if err != nil {
return nil, err
return nil, err
}
}
- cd.infoLastUpdatedTime = cd.clock.Now()
+ cd.infoLastUpdatedTime.Store(cd.clock.Now().UnixNano())
}
cd.lock.Lock()
defer cd.lock.Unlock()
klog.V(3).Infof("[%s] Housekeeping took %s", cd.info.Name, duration)
}
cd.notifyOnDemand()
- cd.lock.Lock()
- defer cd.lock.Unlock()
- cd.statsLastUpdatedTime = cd.clock.Now()
+ cd.statsLastUpdatedTime.Store(cd.clock.Now().UnixNano())
return true
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
// Manager of cAdvisor-monitored containers.
package manager
eventsChannel := make(chan watcher.ContainerEvent, 16)
newManager := &manager{
- containers: make(map[namespacedContainerName]*containerData),
quitChannels: make([]chan error, 0, 2),
memoryCache: memoryCache,
fsInfo: fsInfo,
Name string
}
+// containerMap is a type-safe wrapper around sync.Map for storing containerData
+// keyed by namespacedContainerName.
+type containerMap struct {
+ m sync.Map
+}
+
+// Load returns the containerData for the given name, or nil if not found.
+func (c *containerMap) Load(name namespacedContainerName) (*containerData, bool) {
+ v, ok := c.m.Load(name)
+ if !ok {
+ return nil, false
+ }
+ return v.(*containerData), true
+}
+
+// Store stores the containerData for the given name.
+func (c *containerMap) Store(name namespacedContainerName, data *containerData) {
+ c.m.Store(name, data)
+}
+
+// Delete removes the containerData for the given name.
+func (c *containerMap) Delete(name namespacedContainerName) {
+ c.m.Delete(name)
+}
+
+// Range calls f for each container in the map. If f returns false, iteration stops.
+func (c *containerMap) Range(f func(name namespacedContainerName, data *containerData) bool) {
+ c.m.Range(func(key, value any) bool {
+ return f(key.(namespacedContainerName), value.(*containerData))
+ })
+}
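+
+// For illustration (hypothetical container name), lookups and inserts no longer
+// require a containers lock:
+//
+//	m.containers.Store(namespacedContainerName{Name: "/docker/abc"}, cont)
+//	c, ok := m.containers.Load(namespacedContainerName{Name: "/docker/abc"}) // c is a *containerData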
+
type manager struct {
- containers map[namespacedContainerName]*containerData
- containersLock sync.RWMutex
+ containers containerMap
memoryCache *memory.InMemoryCache
fsInfo fs.FsInfo
sysFs sysfs.SysFs
}
func (m *manager) destroyCollectors() {
- for _, container := range m.containers {
+ m.containers.Range(func(_ namespacedContainerName, container *containerData) bool {
+ if container == nil {
+ return true
+ }
container.perfCollector.Destroy()
container.resctrlCollector.Destroy()
- }
+ return true
+ })
}
func (m *manager) updateMachineInfo(quit chan error) {
}
func (m *manager) getContainerData(containerName string) (*containerData, error) {
- var cont *containerData
- var ok bool
- func() {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
-
- // Ensure we have the container.
- cont, ok = m.containers[namespacedContainerName{
- Name: containerName,
- }]
- }()
+ // Ensure we have the container.
+ cont, ok := m.containers.Load(namespacedContainerName{Name: containerName})
if !ok {
return nil, fmt.Errorf("unknown container %q", containerName)
}
}
func (m *manager) getContainer(containerName string) (*containerData, error) {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
- cont, ok := m.containers[namespacedContainerName{Name: containerName}]
+ cont, ok := m.containers.Load(namespacedContainerName{Name: containerName})
if !ok {
return nil, fmt.Errorf("unknown container %q", containerName)
}
}
func (m *manager) getSubcontainers(containerName string) map[string]*containerData {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
- containersMap := make(map[string]*containerData, len(m.containers))
+ matchedName := path.Join(containerName, "/")
+ containersMap := make(map[string]*containerData)
// Get all the unique subcontainers of the specified container
- matchedName := path.Join(containerName, "/")
- for i := range m.containers {
- if m.containers[i] == nil {
- continue
+ m.containers.Range(func(_ namespacedContainerName, cont *containerData) bool {
+ if cont == nil {
+ return true
}
- name := m.containers[i].info.Name
+ name := cont.info.Name
if name == containerName || strings.HasPrefix(name, matchedName) {
- containersMap[m.containers[i].info.Name] = m.containers[i]
+ containersMap[name] = cont
}
- }
+ return true
+ })
return containersMap
}
}
func (m *manager) getAllNamespacedContainers(ns string) map[string]*containerData {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
- containers := make(map[string]*containerData, len(m.containers))
+ containers := make(map[string]*containerData)
// Get containers in a namespace.
- for name, cont := range m.containers {
+ m.containers.Range(func(name namespacedContainerName, cont *containerData) bool {
+ if cont == nil {
+ return true
+ }
if name.Namespace == ns {
containers[cont.info.Name] = cont
}
- }
+ return true
+ })
return containers
}
}
func (m *manager) namespacedContainer(containerName string, ns string) (*containerData, error) {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
-
// Check for the container in the namespace.
- cont, ok := m.containers[namespacedContainerName{
- Namespace: ns,
- Name: containerName,
- }]
+ if cont, ok := m.containers.Load(namespacedContainerName{Namespace: ns, Name: containerName}); ok {
+ return cont, nil
+ }
// Look for container by short prefix name if no exact match found.
- if !ok {
- for contName, c := range m.containers {
- if contName.Namespace == ns && strings.HasPrefix(contName.Name, containerName) {
- if cont == nil {
- cont = c
- } else {
- return nil, fmt.Errorf("unable to find container in %q namespace. Container %q is not unique", ns, containerName)
- }
+ var cont *containerData
+ var err error
+ m.containers.Range(func(name namespacedContainerName, c *containerData) bool {
+ if name.Namespace == ns && strings.HasPrefix(name.Name, containerName) {
+ if cont == nil {
+ cont = c
+ } else {
+ err = fmt.Errorf("unable to find container in %q namespace. Container %q is not unique", ns, containerName)
+ return false // stop iteration
}
}
+ return true
+ })
- if cont == nil {
- return nil, fmt.Errorf("unable to find container %q in %q namespace", containerName, ns)
- }
+ if err != nil {
+ return nil, err
+ }
+
+ if cont == nil {
+ return nil, fmt.Errorf("unable to find container %q in %q namespace", containerName, ns)
}
return cont, nil
}
func (m *manager) Exists(containerName string) bool {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
-
- namespacedName := namespacedContainerName{
- Name: containerName,
- }
-
- _, ok := m.containers[namespacedName]
+ _, ok := m.containers.Load(namespacedContainerName{Name: containerName})
return ok
}
return nil, err
}
if len(conts) != 1 {
- return nil, fmt.Errorf("Expected the request to match only one container")
+ return nil, fmt.Errorf("expected the request to match only one container")
}
// TODO(rjnagal): handle count? Only if we can do count by type (eg. top 5 cpu users)
ps := []v2.ProcessInfo{}
// Create a container.
func (m *manager) createContainer(containerName string, watchSource watcher.ContainerWatchSource) error {
- m.containersLock.Lock()
- defer m.containersLock.Unlock()
-
- return m.createContainerLocked(containerName, watchSource)
-}
-
-func (m *manager) createContainerLocked(containerName string, watchSource watcher.ContainerWatchSource) error {
namespacedName := namespacedContainerName{
Name: containerName,
}
// Check that the container didn't already exist.
- if _, ok := m.containers[namespacedName]; ok {
+ if _, ok := m.containers.Load(namespacedName); ok {
return nil
}
}
// Add the container name and all its aliases. The aliases must be within the namespace of the factory.
- m.containers[namespacedName] = cont
+ m.containers.Store(namespacedName, cont)
for _, alias := range cont.info.Aliases {
- m.containers[namespacedContainerName{
+ m.containers.Store(namespacedContainerName{
Namespace: cont.info.Namespace,
Name: alias,
- }] = cont
+ }, cont)
}
klog.V(3).Infof("Added container: %q (aliases: %v, namespace: %q)", containerName, cont.info.Aliases, cont.info.Namespace)
}
func (m *manager) destroyContainer(containerName string) error {
- m.containersLock.Lock()
- defer m.containersLock.Unlock()
-
- return m.destroyContainerLocked(containerName)
-}
-
-func (m *manager) destroyContainerLocked(containerName string) error {
namespacedName := namespacedContainerName{
Name: containerName,
}
- cont, ok := m.containers[namespacedName]
+ cont, ok := m.containers.Load(namespacedName)
if !ok {
// Already destroyed, done.
return nil
}
- // Tell the container to stop.
- err := cont.Stop()
+ exitCode, err := cont.handler.GetExitCode()
+ if err != nil {
+ klog.V(4).Infof("Could not retrieve exit code for container %q: %v (using -1)", containerName, err)
+ exitCode = -1
+ }
+
+ err = cont.Stop()
if err != nil {
return err
}
// Remove the container from our records (and all its aliases).
- delete(m.containers, namespacedName)
+ m.containers.Delete(namespacedName)
for _, alias := range cont.info.Aliases {
- delete(m.containers, namespacedContainerName{
+ m.containers.Delete(namespacedContainerName{
Namespace: cont.info.Namespace,
Name: alias,
})
}
- klog.V(3).Infof("Destroyed container: %q (aliases: %v, namespace: %q)", containerName, cont.info.Aliases, cont.info.Namespace)
+ klog.V(3).Infof("Destroyed container: %q (aliases: %v, namespace: %q, exit_code: %d)", containerName, cont.info.Aliases, cont.info.Namespace, exitCode)
contRef, err := cont.handler.ContainerReference()
if err != nil {
ContainerName: contRef.Name,
Timestamp: time.Now(),
EventType: info.EventContainerDeletion,
+ EventData: info.EventData{
+ ContainerDeletion: &info.ContainerDeletionEventData{
+ ExitCode: exitCode,
+ },
+ },
}
err = m.eventHandler.AddEvent(newEvent)
if err != nil {
// Detect all containers that have been added or deleted from the specified container.
func (m *manager) getContainersDiff(containerName string) (added []info.ContainerReference, removed []info.ContainerReference, err error) {
// Get all subcontainers recursively.
- m.containersLock.RLock()
- cont, ok := m.containers[namespacedContainerName{
- Name: containerName,
- }]
- m.containersLock.RUnlock()
+ cont, ok := m.containers.Load(namespacedContainerName{Name: containerName})
if !ok {
return nil, nil, fmt.Errorf("failed to find container %q while checking for new containers", containerName)
}
}
allContainers = append(allContainers, info.ContainerReference{Name: containerName})
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
-
// Determine which were added and which were removed.
allContainersSet := make(map[string]*containerData)
- for name, d := range m.containers {
+ m.containers.Range(func(name namespacedContainerName, cont *containerData) bool {
+ if cont == nil {
+ return true
+ }
// Only add the canonical name.
- if d.info.Name == name.Name {
- allContainersSet[name.Name] = d
+ if cont.info.Name == name.Name {
+ allContainersSet[name.Name] = cont
}
- }
+ return true
+ })
// Added containers
for _, c := range allContainers {
delete(allContainersSet, c.Name)
- _, ok := m.containers[namespacedContainerName{
- Name: c.Name,
- }]
+ _, ok := m.containers.Load(namespacedContainerName{Name: c.Name})
if !ok {
added = append(added, c)
}
debugInfo := container.DebugInfo()
// Get unique containers.
- var conts map[*containerData]struct{}
- func() {
- m.containersLock.RLock()
- defer m.containersLock.RUnlock()
-
- conts = make(map[*containerData]struct{}, len(m.containers))
- for _, c := range m.containers {
- conts[c] = struct{}{}
+ conts := make(map[*containerData]struct{})
+ m.containers.Range(func(_ namespacedContainerName, cont *containerData) bool {
+ if cont != nil {
+ conts[cont] = struct{}{}
}
- }()
+ return true
+ })
// List containers.
lines := make([]string, 0, len(conts))
}}
},
},
+ {
+ name: "container_health_state",
+ help: "The result of the container's health check",
+ valueType: prometheus.GaugeValue,
+ getValues: getContainerHealthState,
+ },
},
includedMetrics: includedMetrics,
opts: opts,
timestamp: s.Timestamp,
}}
},
+ }, {
+ name: "container_cpu_cfs_burst_periods_total",
+ help: "Number of periods when burst occurs.",
+ valueType: prometheus.CounterValue,
+ condition: func(s info.ContainerSpec) bool { return s.Cpu.Quota != 0 },
+ getValues: func(s *info.ContainerStats) metricValues {
+ return metricValues{
+ {
+ value: float64(s.Cpu.CFS.BurstsPeriods),
+ timestamp: s.Timestamp,
+ }}
+ },
+ }, {
+ name: "container_cpu_cfs_burst_seconds_total",
+ help: "Total time duration the container has been bursted.",
+ valueType: prometheus.CounterValue,
+ condition: func(s info.ContainerSpec) bool { return s.Cpu.Quota != 0 },
+ getValues: func(s *info.ContainerStats) metricValues {
+ return metricValues{
+ {
+ value: float64(s.Cpu.CFS.BurstTime) / float64(time.Second),
+ timestamp: s.Timestamp,
+ }}
+ },
},
}...)
}
return float64(fs.WeightedIoTime) / float64(time.Second)
}, s.Timestamp)
},
+ }, {
+ name: "container_fs_io_cost_usage_seconds_total",
+ help: "Cumulative IOCost usage in seconds",
+ valueType: prometheus.CounterValue,
+ extraLabels: []string{"device"},
+ getValues: func(s *info.ContainerStats) metricValues {
+ return ioValues(
+ s.DiskIo.IoCostUsage, "Count", asMicrosecondsToSeconds,
+ []info.FsStats{}, nil,
+ s.Timestamp,
+ )
+ },
+ }, {
+ name: "container_fs_io_cost_wait_seconds_total",
+ help: "Cumulative IOCost wait in seconds",
+ valueType: prometheus.CounterValue,
+ extraLabels: []string{"device"},
+ getValues: func(s *info.ContainerStats) metricValues {
+ return ioValues(
+ s.DiskIo.IoCostWait, "Count", asMicrosecondsToSeconds,
+ []info.FsStats{}, nil,
+ s.Timestamp,
+ )
+ },
+ }, {
+ name: "container_fs_io_cost_indebt_seconds_total",
+ help: "Cumulative IOCost debt in seconds",
+ valueType: prometheus.CounterValue,
+ extraLabels: []string{"device"},
+ getValues: func(s *info.ContainerStats) metricValues {
+ return ioValues(
+ s.DiskIo.IoCostIndebt, "Count", asMicrosecondsToSeconds,
+ []info.FsStats{}, nil,
+ s.Timestamp,
+ )
+ },
+ }, {
+ name: "container_fs_io_cost_indelay_seconds_total",
+ help: "Cumulative IOCost delay in seconds",
+ valueType: prometheus.CounterValue,
+ extraLabels: []string{"device"},
+ getValues: func(s *info.ContainerStats) metricValues {
+ return ioValues(
+ s.DiskIo.IoCostIndelay, "Count", asMicrosecondsToSeconds,
+ []info.FsStats{}, nil,
+ s.Timestamp,
+ )
+ },
},
{
name: "container_blkio_device_usage_total",
}
return values
}
+
+func getContainerHealthState(s *info.ContainerStats) metricValues {
+ value := float64(0)
+ switch s.Health.Status {
+ case "healthy":
+ value = 1
+ case "": // if container has no health check defined
+ value = -1
+ default: // starting or unhealthy
+ }
+ return metricValues{{
+ value: value,
+ timestamp: s.Timestamp,
+ }}
+}
Periods: 723,
ThrottledPeriods: 18,
ThrottledTime: 1724314000,
+ BurstsPeriods: 25,
+ BurstTime: 500000000,
},
Schedstat: info.CpuSchedstat{
RunTime: 53643567,
"Write": 6,
},
}},
+ IoCostUsage: []info.PerDiskStats{{
+ Device: "sda1",
+ Major: 8,
+ Minor: 1,
+ Stats: map[string]uint64{"Count": 1500000},
+ }},
+ IoCostWait: []info.PerDiskStats{{
+ Device: "sda1",
+ Major: 8,
+ Minor: 1,
+ Stats: map[string]uint64{"Count": 2500000},
+ }},
+ IoCostIndebt: []info.PerDiskStats{{
+ Device: "sda1",
+ Major: 8,
+ Minor: 1,
+ Stats: map[string]uint64{"Count": 500000},
+ }},
+ IoCostIndelay: []info.PerDiskStats{{
+ Device: "sda1",
+ Major: 8,
+ Minor: 1,
+ Stats: map[string]uint64{"Count": 750000},
+ }},
PSI: info.PSIStats{
Full: info.PSIData{
Avg10: 0.3,
},
},
CpuSet: info.CPUSetStats{MemoryMigrate: 1},
+ Health: info.Health{Status: "healthy"},
},
},
},
func (p *erroringSubcontainersInfoProvider) GetVersionInfo() (*info.VersionInfo, error) {
if p.shouldFail {
- return nil, errors.New("Oops 1")
+ return nil, errors.New("oops 1")
}
return p.successfulProvider.GetVersionInfo()
}
func (p *erroringSubcontainersInfoProvider) GetMachineInfo() (*info.MachineInfo, error) {
if p.shouldFail {
- return nil, errors.New("Oops 2")
+ return nil, errors.New("oops 2")
}
return p.successfulProvider.GetMachineInfo()
}
func (p *erroringSubcontainersInfoProvider) GetRequestedContainersInfo(
a string, opt v2.RequestOptions) (map[string]*info.ContainerInfo, error) {
if p.shouldFail {
- return map[string]*info.ContainerInfo{}, errors.New("Oops 3")
+ return map[string]*info.ContainerInfo{}, errors.New("oops 3")
}
return p.successfulProvider.GetRequestedContainersInfo(a, opt)
}
//go:build libipmctl && cgo
-// +build libipmctl,cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build !libipmctl || !cgo
-// +build !libipmctl !cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build libpfm && cgo
-// +build libpfm,cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build !libpfm || !cgo
-// +build !libpfm !cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build libpfm && cgo
-// +build libpfm,cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build !libpfm || !cgo
-// +build !libpfm !cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build libpfm && cgo
-// +build libpfm,cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
//go:build libpfm && cgo
-// +build libpfm,cgo
// Copyright 2020 Google Inc. All Rights Reserved.
//
Fifty: val,
Ninety: val,
NinetyFive: val,
+ Count: 1,
}
r.Add(sample)
}
p.Fifty = r.samples.GetPercentile(0.5)
p.Ninety = r.samples.GetPercentile(0.9)
p.NinetyFive = r.samples.GetPercentile(0.95)
+ // The number of collected samples equals the count tracked by the mean accumulator.
+ p.Count = r.mean.count
p.Present = true
return p
}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package cpuload
import (
--- /dev/null
+// Copyright 2015 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build !linux
+
+package cpuload
+
+import (
+ "fmt"
+
+ info "github.com/google/cadvisor/info/v1"
+)
+
+type CpuLoadReader interface {
+ // Start the reader.
+ Start() error
+
+ // Stop the reader and clean up internal state.
+ Stop()
+
+ // Retrieve Cpu load for a given group.
+ // name is the full hierarchical name of the container.
+ // Path is an absolute filesystem path for a container under CPU cgroup hierarchy.
+ GetCpuLoad(name string, path string) (info.LoadStats, error)
+}
+
+func New() (CpuLoadReader, error) {
+ return nil, fmt.Errorf("cpuload is not supported on this platform")
+}
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package netlink
import (
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package netlink
import (
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package netlink
import (
// See the License for the specific language governing permissions and
// limitations under the License.
+//go:build linux
+
package oomparser
import (
//go:build !x86
-// +build !x86
// Copyright 2021 Google Inc. All Rights Reserved.
//
//go:build x86
-// +build x86
// Copyright 2021 Google Inc. All Rights Reserved.
//
cpusCount := len(cpusPaths)
if cpusCount == 0 {
- err = fmt.Errorf("Any CPU is not available, cpusPath: %s", cpusPath)
+ err = fmt.Errorf("no CPU is available, cpusPath: %s", cpusPath)
return nil, 0, err
}
+++ /dev/null
-# Binaries for programs and plugins
-*.exe
-*.dll
-*.so
-*.dylib
-
-# Test binary, build with `go test -c`
-*.test
-
-# Output of the go coverage tool, specifically when used with LiteIDE
-*.out
-
-# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
-.glide/
-
-examples/remove-empty-directories/remove-empty-directories
-examples/sizes/sizes
-examples/walk-fast/walk-fast
-examples/walk-stdlib/walk-stdlib
+++ /dev/null
-BSD 2-Clause License
-
-Copyright (c) 2017, Karrick McDermott
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this
- list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright notice,
- this list of conditions and the following disclaimer in the documentation
- and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+++ /dev/null
-# godirwalk
-
-`godirwalk` is a library for traversing a directory tree on a file
-system.
-
-[](https://godoc.org/github.com/karrick/godirwalk) [](https://dev.azure.com/microsoft0235/microsoft/_build/latest?definitionId=1&branchName=master)
-
-In short, why did I create this library?
-
-1. It's faster than `filepath.Walk`.
-1. It's more correct on Windows than `filepath.Walk`.
-1. It's more easy to use than `filepath.Walk`.
-1. It's more flexible than `filepath.Walk`.
-
-Depending on your specific circumstances, [you might no longer need a
-library for file walking in
-Go](https://engineering.kablamo.com.au/posts/2021/quick-comparison-between-go-file-walk-implementations).
-
-## Usage Example
-
-Additional examples are provided in the `examples/` subdirectory.
-
-This library will normalize the provided top level directory name
-based on the os-specific path separator by calling `filepath.Clean` on
-its first argument. However it always provides the pathname created by
-using the correct os-specific path separator when invoking the
-provided callback function.
-
-```Go
- dirname := "some/directory/root"
- err := godirwalk.Walk(dirname, &godirwalk.Options{
- Callback: func(osPathname string, de *godirwalk.Dirent) error {
- // Following string operation is not most performant way
- // of doing this, but common enough to warrant a simple
- // example here:
- if strings.Contains(osPathname, ".git") {
- return godirwalk.SkipThis
- }
- fmt.Printf("%s %s\n", de.ModeType(), osPathname)
- return nil
- },
- Unsorted: true, // (optional) set true for faster yet non-deterministic enumeration (see godoc)
- })
-```
-
-This library not only provides functions for traversing a file system
-directory tree, but also for obtaining a list of immediate descendants
-of a particular directory, typically much more quickly than using
-`os.ReadDir` or `os.ReadDirnames`.
-
-## Description
-
-Here's why I use `godirwalk` in preference to `filepath.Walk`,
-`os.ReadDir`, and `os.ReadDirnames`.
-
-### It's faster than `filepath.Walk`
-
-When compared against `filepath.Walk` in benchmarks, it has been
-observed to run between five and ten times the speed on darwin, at
-speeds comparable to the that of the unix `find` utility; and about
-twice the speed on linux; and about four times the speed on Windows.
-
-How does it obtain this performance boost? It does less work to give
-you nearly the same output. This library calls the same `syscall`
-functions to do the work, but it makes fewer calls, does not throw
-away information that it might need, and creates less memory churn
-along the way by reusing the same scratch buffer for reading from a
-directory rather than reallocating a new buffer every time it reads
-file system entry data from the operating system.
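-
-The same scratch buffer mechanism is exposed to callers through the
-`ScratchBuffer` field of the `Options` structure, so a program that
-walks many trees can reuse one buffer across invocations. A minimal
-sketch, assuming the `godirwalk` import; the `walkAll` name and the
-`roots` slice are illustrative only:
-
-```Go
-    func walkAll(roots []string) error {
-        scratch := make([]byte, godirwalk.MinimumScratchBufferSize)
-        for _, root := range roots {
-            err := godirwalk.Walk(root, &godirwalk.Options{
-                ScratchBuffer: scratch, // one buffer reused for every directory read
-                Callback: func(osPathname string, de *godirwalk.Dirent) error {
-                    return nil // process each node here
-                },
-            })
-            if err != nil {
-                return err
-            }
-        }
-        return nil
-    }
-```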
-
-While traversing a file system directory tree, `filepath.Walk` obtains
-the list of immediate descendants of a directory, and throws away the
-node type information that the operating system provides along with
-each node's name. Then, immediately prior to invoking the callback
-function, `filepath.Walk` invokes `os.Stat` for each node, and passes
-the returned `os.FileInfo` information to the callback.
-
-While the `os.FileInfo` information provided by `os.Stat` is extremely
-helpful--and even includes the `os.FileMode` data--providing it
-requires an additional system call for each node.
-
-Because most callbacks only care about what the node type is, this
-library does not throw the type information away, but rather provides
-that information to the callback function in the form of an
-`os.FileMode` value. Note that the `os.FileMode` value this library
-provides contains only the node type information, and does not include
-the permission bits, sticky bits, or other information from the file's
-mode. If the callback does care about a particular node's entire
-`os.FileInfo` data structure, the callback can easily invoke `os.Stat`
-when needed, and only when needed.
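-
-A minimal sketch of such a callback, assuming the `godirwalk` import;
-the `sizeCallback` name and the size report are illustrative only:
-
-```Go
-    func sizeCallback(osPathname string, de *godirwalk.Dirent) error {
-        if !de.IsRegular() {
-            return nil // the mode type alone answers this question; no extra syscall
-        }
-        fi, err := os.Stat(osPathname) // pay for the additional system call only here
-        if err != nil {
-            return err
-        }
-        fmt.Printf("%d %s\n", fi.Size(), osPathname)
-        return nil
-    }
-```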
-
-#### Benchmarks
-
-##### macOS
-
-```Bash
-$ go test -bench=. -benchmem
-goos: darwin
-goarch: amd64
-pkg: github.com/karrick/godirwalk
-BenchmarkReadDirnamesStandardLibrary-12 50000 26250 ns/op 10360 B/op 16 allocs/op
-BenchmarkReadDirnamesThisLibrary-12 50000 24372 ns/op 5064 B/op 20 allocs/op
-BenchmarkFilepathWalk-12 1 1099524875 ns/op 228415912 B/op 416952 allocs/op
-BenchmarkGodirwalk-12 2 526754589 ns/op 103110464 B/op 451442 allocs/op
-BenchmarkGodirwalkUnsorted-12 3 509219296 ns/op 100751400 B/op 378800 allocs/op
-BenchmarkFlameGraphFilepathWalk-12 1 7478618820 ns/op 2284138176 B/op 4169453 allocs/op
-BenchmarkFlameGraphGodirwalk-12 1 4977264058 ns/op 1031105328 B/op 4514423 allocs/op
-PASS
-ok github.com/karrick/godirwalk 21.219s
-```
-
-##### Linux
-
-```Bash
-$ go test -bench=. -benchmem
-goos: linux
-goarch: amd64
-pkg: github.com/karrick/godirwalk
-BenchmarkReadDirnamesStandardLibrary-12 100000 15458 ns/op 10360 B/op 16 allocs/op
-BenchmarkReadDirnamesThisLibrary-12 100000 14646 ns/op 5064 B/op 20 allocs/op
-BenchmarkFilepathWalk-12 2 631034745 ns/op 228210216 B/op 416939 allocs/op
-BenchmarkGodirwalk-12 3 358714883 ns/op 102988664 B/op 451437 allocs/op
-BenchmarkGodirwalkUnsorted-12 3 355363915 ns/op 100629234 B/op 378796 allocs/op
-BenchmarkFlameGraphFilepathWalk-12 1 6086913991 ns/op 2282104720 B/op 4169417 allocs/op
-BenchmarkFlameGraphGodirwalk-12 1 3456398824 ns/op 1029886400 B/op 4514373 allocs/op
-PASS
-ok github.com/karrick/godirwalk 19.179s
-```
-
-### It's more correct on Windows than `filepath.Walk`
-
-I did not previously care about this either, but humor me. We all love
-how we can write once and run everywhere. It is essential for the
-language's adoption, growth, and success, that the software we create
-can run unmodified on all architectures and operating systems
-supported by Go.
-
-When the traversed file system has a logical loop caused by symbolic
-links to directories, on unix `filepath.Walk` ignores symbolic links
-and traverses the entire directory tree without error. On Windows
-however, `filepath.Walk` will continue following directory symbolic
-links, even though it is not supposed to, eventually causing
-`filepath.Walk` to terminate early and return an error when the
-pathname gets too long from concatenating endless loops of symbolic
-links onto the pathname. This error comes from Windows, passes through
-`filepath.Walk`, and on to the upstream client running `filepath.Walk`.
-
-The takeaway is that behavior is different based on which platform
-`filepath.Walk` is running on. While this is clearly not intentional,
-until it is fixed in the standard library, it presents a compatibility
-problem.
-
-This library fixes the above problem such that it will never follow
-logical file system loops on either unix or Windows. Furthermore, it
-will only follow symbolic links when `FollowSymbolicLinks` is set to
-true. Behavior on Windows and other operating systems is identical.
-
-### It's easier to use than `filepath.Walk`
-
-While this library strives to mimic the behavior of the incredibly
-well-written `filepath.Walk` standard library, there are places where
-it deviates a bit in order to provide a simpler or more intuitive
-caller interface.
-
-#### Callback interface does not send you an error to check
-
-Since this library does not invoke `os.Stat` on every file system node
-it encounters, there is no possible error event for the callback
-function to filter on. The third argument in the `filepath.WalkFunc`
-function signature to pass the error from `os.Stat` to the callback
-function is no longer necessary, and is thus eliminated from the
-signature of this library's callback function.
-
-Furthermore, this slight interface difference between
-`filepath.WalkFunc` and this library's `WalkFunc` eliminates the
-boilerplate code that callback handlers must write when they use
-`filepath.Walk`. Rather than every callback function needing to check
-the error value passed into it and branch accordingly, users of this
-library do not even have an error value to check immediately upon
-entry into the callback function. This is an improvement both in
-runtime performance and code clarity.
-
-#### Callback function is invoked with OS specific file system path separator
-
-On every OS platform `filepath.Walk` invokes the callback function
-with a solidus (`/`) delimited pathname. By contrast this library
-invokes the callback with the os-specific pathname separator,
-obviating a call to `filepath.Clean` in the callback function for each
-node prior to actually using the provided pathname.
-
-In other words, even on Windows, `filepath.Walk` will invoke the
-callback with `some/path/to/foo.txt`, requiring well-written clients
-to perform pathname normalization for every file prior to working with
-the specified file. This is a hidden boilerplate requirement for
-creating truly os-agnostic callback functions. In truth, many clients
-developed on unix and not tested on Windows neglect this subtlety,
-which results in software bugs when that software is run on Windows.
-
-This library invokes the callback function with `some\path\to\foo.txt`
-for the same file when running on Windows, eliminating the need for
-the client to normalize the pathname, and lessening the likelihood
-that a client will work on unix but not on Windows.
-
-This enhancement eliminates the need for some boilerplate code
-in callback functions while improving the runtime performance of this
-library.
-
-#### `godirwalk.SkipThis` is more intuitive to use than `filepath.SkipDir`
-
-One arguably confusing aspect of the `filepath.WalkFunc` interface
-that this library must emulate is how a caller tells the `Walk`
-function to skip file system entries. With both `filepath.Walk` and
-this library's `Walk`, when a callback function wants to skip a
-directory and not descend into its children, it returns
-`filepath.SkipDir`. If the callback function returns
-`filepath.SkipDir` for a non-directory, `filepath.Walk` and this
-library will stop processing any more entries in the current
-directory. This is not necessarily what most developers want or
-expect. If you want to simply skip a particular non-directory entry
-but continue processing entries in the directory, the callback
-function must return nil.
-
-The implication of this interface design is that when you want to
-walk a file system hierarchy and skip an entry, you have to return a
-different value based on what type of file system entry that node
-is. To skip an entry, if the entry is a directory, you must return
-`filepath.SkipDir`, and if the entry is not a directory, you must return
-`nil`. This is an unfortunate hurdle I have observed many developers
-struggling with, simply because it is not an intuitive interface.
-
-Here is an example callback function that adheres to the
-`filepath.WalkFunc` interface to have it skip any file system entry
-whose full pathname includes a particular substring, `optSkip`. Note
-that this library still supports identical behavior of `filepath.Walk`
-when the callback function returns `filepath.SkipDir`.
-
-```Go
- func callback1(osPathname string, de *godirwalk.Dirent) error {
- if optSkip != "" && strings.Contains(osPathname, optSkip) {
- if b, err := de.IsDirOrSymlinkToDir(); b == true && err == nil {
- return filepath.SkipDir
- }
- return nil
- }
- // Process file like normal...
- return nil
- }
-```
-
-This library attempts to eliminate some of that logic boilerplate
-required in callback functions by providing a new token error value,
-`SkipThis`, which a callback function may return to skip the current
-file system entry regardless of what type of entry it is. If the
-current entry is a directory, its children will not be enumerated,
-exactly as if the callback had returned `filepath.SkipDir`. If the
-current entry is a non-directory, the next file system entry in the
-current directory will be enumerated, exactly as if the callback
-returned `nil`. The following example callback function has identical
-behavior to the previous one, but has less boilerplate, and,
-admittedly, logic that I find simpler to follow.
-
-```Go
- func callback2(osPathname string, de *godirwalk.Dirent) error {
- if optSkip != "" && strings.Contains(osPathname, optSkip) {
- return godirwalk.SkipThis
- }
- // Process file like normal...
- return nil
- }
-```
-
-### It's more flexible than `filepath.Walk`
-
-#### Configurable Handling of Symbolic Links
-
-The default behavior of this library is to ignore symbolic links to
-directories when walking a directory tree, just like `filepath.Walk`
-does. However, it does invoke the callback function with each node it
-finds, including symbolic links. If a particular use case requires
-following symbolic links when traversing a directory tree, this
-library can be invoked in a manner to do so, by setting the
-`FollowSymbolicLinks` config parameter to `true`.
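-
-A minimal sketch, assuming the `godirwalk` import and a `dirname`
-variable naming the root of the tree:
-
-```Go
-    err := godirwalk.Walk(dirname, &godirwalk.Options{
-        FollowSymbolicLinks: true, // also descend into directories reached through symbolic links
-        Callback: func(osPathname string, de *godirwalk.Dirent) error {
-            fmt.Printf("%s %s\n", de.ModeType(), osPathname)
-            return nil
-        },
-    })
-    if err != nil {
-        fmt.Fprintf(os.Stderr, "%s\n", err)
-    }
-```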
-
-#### Configurable Sorting of Directory Children
-
-The default behavior of this library is to always sort the immediate
-descendants of a directory prior to visiting each node, just like
-`filepath.Walk` does. This is usually the desired behavior. However,
-this does come with slight performance and memory penalties required
-to sort the names when a directory node has many entries. Additionally,
-when the caller specifies `Unsorted` enumeration in the configuration
-parameter, reading directories is performed lazily, as the caller
-consumes entries. If a particular use case exists that does not
-require sorting the directory's immediate descendants prior to
-visiting its nodes, this library will skip the sorting step when the
-`Unsorted` parameter is set to `true`.
-
-Here's an interesting read on the potential hazards of traversing a
-file system hierarchy in a non-deterministic order. If you know the
-problem you are solving is not affected by the order files are
-visited, then I encourage you to use `Unsorted`. Otherwise skip
-setting this option.
-
-[Researchers find bug in Python script may have affected hundreds of studies](https://arstechnica.com/information-technology/2019/10/chemists-discover-cross-platform-python-scripts-not-so-cross-platform/)
-
-#### Configurable Post Children Callback
-
-This library provides upstream code with the ability to specify a
-callback function to be invoked for each directory after its children
-are processed. This has been used to recursively delete empty
-directories after traversing the file system in a more efficient
-manner. See the `examples/clean-empties` directory for an example of
-this usage.
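-
-The sketch below illustrates the idea rather than reproducing that
-example: after a directory's children have been visited, remove the
-directory if it is now empty. It assumes the `godirwalk` import and a
-`dirname` variable naming the root of the tree:
-
-```Go
-    err := godirwalk.Walk(dirname, &godirwalk.Options{
-        Unsorted: true,
-        Callback: func(_ string, _ *godirwalk.Dirent) error { return nil },
-        PostChildrenCallback: func(osPathname string, _ *godirwalk.Dirent) error {
-            // Invoked for each directory only after its children were processed.
-            deChildren, err := godirwalk.ReadDirents(osPathname, nil)
-            if err != nil {
-                return err
-            }
-            if len(deChildren) == 0 && osPathname != dirname {
-                return os.Remove(osPathname) // the directory is empty now; remove it
-            }
-            return nil
-        },
-    })
-    if err != nil {
-        fmt.Fprintf(os.Stderr, "%s\n", err)
-    }
-```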
-
-#### Configurable Error Callback
-
-This library provides upstream code with the ability to specify a
-callback to be invoked for errors that the operating system returns,
-allowing the upstream code to determine the next course of action to
-take: whether to halt walking the hierarchy, as it would do were no
-error callback provided, or to skip the node that caused the error. See
-the `examples/walk-fast` directory for an example of this usage.
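-
-A minimal sketch, assuming the `godirwalk` import and a `dirname`
-variable naming the root of the tree; logging to standard error is
-just one possible response to a reported error:
-
-```Go
-    err := godirwalk.Walk(dirname, &godirwalk.Options{
-        Callback: func(osPathname string, de *godirwalk.Dirent) error {
-            fmt.Printf("%s %s\n", de.ModeType(), osPathname)
-            return nil
-        },
-        ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
-            fmt.Fprintf(os.Stderr, "ERROR: %s: %s\n", osPathname, err)
-            return godirwalk.SkipNode // skip the offending node and keep walking
-        },
-    })
-    if err != nil {
-        fmt.Fprintf(os.Stderr, "%s\n", err)
-    }
-```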
+++ /dev/null
-# Go
-# Build your Go project.
-# Add steps that test, save build artifacts, deploy, and more:
-# https://docs.microsoft.com/azure/devops/pipelines/languages/go
-
-trigger:
-- master
-
-variables:
- GOVERSION: 1.13
-
-jobs:
- - job: Linux
- pool:
- vmImage: 'ubuntu-latest'
- steps:
- - task: GoTool@0
- displayName: 'Use Go $(GOVERSION)'
- inputs:
- version: $(GOVERSION)
- - task: Go@0
- inputs:
- command: test
- arguments: -race -v ./...
- displayName: 'Execute Tests'
-
- - job: Mac
- pool:
- vmImage: 'macos-latest'
- steps:
- - task: GoTool@0
- displayName: 'Use Go $(GOVERSION)'
- inputs:
- version: $(GOVERSION)
- - task: Go@0
- inputs:
- command: test
- arguments: -race -v ./...
- displayName: 'Execute Tests'
-
- - job: Windows
- pool:
- vmImage: 'windows-latest'
- steps:
- - task: GoTool@0
- displayName: 'Use Go $(GOVERSION)'
- inputs:
- version: $(GOVERSION)
- - task: Go@0
- inputs:
- command: test
- arguments: -race -v ./...
- displayName: 'Execute Tests'
+++ /dev/null
-#!/bin/bash
-
-# for version in v1.9.1 v1.10.0 v1.10.3 v1.10.12 v1.11.2 v1.11.3 v1.12.0 v1.13.1 v1.14.0 v1.14.1 ; do
-for version in v1.10.12 v1.14.1 v1.15.2 ; do
- echo "### $version" > $version.txt
- git checkout -- go.mod && git checkout $version && go test -run=NONE -bench=Benchmark2 >> $version.txt || exit 1
-done
+++ /dev/null
-// +build godirwalk_debug
-
-package godirwalk
-
-import (
- "fmt"
- "os"
-)
-
-// debug formats and prints arguments to stderr for development builds
-func debug(f string, a ...interface{}) {
- // fmt.Fprintf(os.Stderr, f, a...)
- os.Stderr.Write([]byte("godirwalk: " + fmt.Sprintf(f, a...)))
-}
+++ /dev/null
-// +build !godirwalk_debug
-
-package godirwalk
-
-// debug is a no-op for release builds
-func debug(_ string, _ ...interface{}) {}
+++ /dev/null
-package godirwalk
-
-import (
- "os"
- "path/filepath"
-)
-
-// Dirent stores the name and file system mode type of discovered file system
-// entries.
-type Dirent struct {
- name string // base name of the file system entry.
- path string // path name of the file system entry.
- modeType os.FileMode // modeType is the type of file system entry.
-}
-
-// NewDirent returns a newly initialized Dirent structure, or an error. This
-// function does not follow symbolic links.
-//
-// This function is rarely used, as Dirent structures are provided by other
-// functions in this library that read and walk directories, but is provided,
-// however, for the occasion when a program needs to create a Dirent.
-func NewDirent(osPathname string) (*Dirent, error) {
- modeType, err := modeType(osPathname)
- if err != nil {
- return nil, err
- }
- return &Dirent{
- name: filepath.Base(osPathname),
- path: filepath.Dir(osPathname),
- modeType: modeType,
- }, nil
-}
-
-// IsDir returns true if and only if the Dirent represents a file system
-// directory. Note that on some operating systems, more than one file mode bit
-// may be set for a node. For instance, on Windows, a symbolic link that points
-// to a directory will have both the directory and the symbolic link bits set.
-func (de Dirent) IsDir() bool { return de.modeType&os.ModeDir != 0 }
-
-// IsDirOrSymlinkToDir returns true if and only if the Dirent represents a file
-// system directory, or a symbolic link to a directory. Note that if the Dirent
-// is not a directory but is a symbolic link, this method will resolve by
-// sending a request to the operating system to follow the symbolic link.
-func (de Dirent) IsDirOrSymlinkToDir() (bool, error) {
- if de.IsDir() {
- return true, nil
- }
- if !de.IsSymlink() {
- return false, nil
- }
- // Does this symlink point to a directory?
- info, err := os.Stat(filepath.Join(de.path, de.name))
- if err != nil {
- return false, err
- }
- return info.IsDir(), nil
-}
-
-// IsRegular returns true if and only if the Dirent represents a regular file.
-// That is, it ensures that no mode type bits are set.
-func (de Dirent) IsRegular() bool { return de.modeType&os.ModeType == 0 }
-
-// IsSymlink returns true if and only if the Dirent represents a file system
-// symbolic link. Note that on some operating systems, more than one file mode
-// bit may be set for a node. For instance, on Windows, a symbolic link that
-// points to a directory will have both the directory and the symbolic link bits
-// set.
-func (de Dirent) IsSymlink() bool { return de.modeType&os.ModeSymlink != 0 }
-
-// IsDevice returns true if and only if the Dirent represents a device file.
-func (de Dirent) IsDevice() bool { return de.modeType&os.ModeDevice != 0 }
-
-// ModeType returns the mode bits that specify the file system node type. We
-// could make our own enum-like data type for encoding the file type, but Go's
-// runtime already gives us architecture independent file modes, as discussed in
-// `os/types.go`:
-//
-// Go's runtime FileMode type has same definition on all systems, so that
-// information about files can be moved from one system to another portably.
-func (de Dirent) ModeType() os.FileMode { return de.modeType }
-
-// Name returns the base name of the file system entry.
-func (de Dirent) Name() string { return de.name }
-
-// reset releases memory held by the entry's name and path, and resets the mode type to 0.
-func (de *Dirent) reset() {
- de.name = ""
- de.path = ""
- de.modeType = 0
-}
-
-// Dirents represents a slice of Dirent pointers, which are sortable by base
-// name. This type satisfies the `sort.Interface` interface.
-type Dirents []*Dirent
-
-// Len returns the count of Dirent structures in the slice.
-func (l Dirents) Len() int { return len(l) }
-
-// Less returns true if and only if the base name of the element specified by
-// the first index is lexicographically less than that of the second index.
-func (l Dirents) Less(i, j int) bool { return l[i].name < l[j].name }
-
-// Swap exchanges the two Dirent entries specified by the two provided indexes.
-func (l Dirents) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
+++ /dev/null
-/*
-Package godirwalk provides functions to read and traverse directory trees.
-
-In short, why do I use this library?
-
-* It's faster than `filepath.Walk`.
-
-* It's more correct on Windows than `filepath.Walk`.
-
-* It's easier to use than `filepath.Walk`.
-
-* It's more flexible than `filepath.Walk`.
-
-USAGE
-
-This library will normalize the provided top level directory name based on the
-os-specific path separator by calling `filepath.Clean` on its first
-argument. However, it always provides the pathname created by using the correct
-os-specific path separator when invoking the provided callback function.
-
- dirname := "some/directory/root"
- err := godirwalk.Walk(dirname, &godirwalk.Options{
- Callback: func(osPathname string, de *godirwalk.Dirent) error {
- fmt.Printf("%s %s\n", de.ModeType(), osPathname)
- return nil
- },
- })
-
-This library not only provides functions for traversing a file system directory
-tree, but also for obtaining a list of immediate descendants of a particular
-directory, typically much more quickly than using `os.ReadDir` or
-`os.ReadDirnames`.
-
- scratchBuffer := make([]byte, godirwalk.MinimumScratchBufferSize)
-
- names, err := godirwalk.ReadDirnames("some/directory", scratchBuffer)
- // ...
-
- entries, err := godirwalk.ReadDirents("another/directory", scratchBuffer)
- // ...
-*/
-package godirwalk
+++ /dev/null
-// +build dragonfly freebsd openbsd netbsd
-
-package godirwalk
-
-import "syscall"
-
-func inoFromDirent(de *syscall.Dirent) uint64 {
- return uint64(de.Fileno)
-}
+++ /dev/null
-// +build aix darwin linux nacl solaris
-
-package godirwalk
-
-import "syscall"
-
-func inoFromDirent(de *syscall.Dirent) uint64 {
- return uint64(de.Ino)
-}
+++ /dev/null
-package godirwalk
-
-import (
- "os"
-)
-
-// modeType returns the mode type of the file system entry identified by
-// osPathname by calling os.LStat function, to intentionally not follow symbolic
-// links.
-//
-// Even though os.LStat provides all file mode bits, we want to ensure same
-// values returned to caller regardless of whether we obtained file mode bits
-// from syscall or stat call. Therefore mask out the additional file mode bits
-// that are provided by stat but not by the syscall, so users can rely on their
-// values.
-func modeType(osPathname string) (os.FileMode, error) {
- fi, err := os.Lstat(osPathname)
- if err == nil {
- return fi.Mode() & os.ModeType, nil
- }
- return 0, err
-}
+++ /dev/null
-// +build darwin dragonfly freebsd linux netbsd openbsd
-
-package godirwalk
-
-import (
- "os"
- "path/filepath"
- "syscall"
-)
-
-// modeTypeFromDirent converts a syscall defined constant, which is in the
-// purview of the OS, to a constant defined by Go, assumed by this project to
-// be stable.
-//
-// When the syscall constant is not recognized, this function falls back to a
-// Stat on the file system.
-func modeTypeFromDirent(de *syscall.Dirent, osDirname, osBasename string) (os.FileMode, error) {
- switch de.Type {
- case syscall.DT_REG:
- return 0, nil
- case syscall.DT_DIR:
- return os.ModeDir, nil
- case syscall.DT_LNK:
- return os.ModeSymlink, nil
- case syscall.DT_CHR:
- return os.ModeDevice | os.ModeCharDevice, nil
- case syscall.DT_BLK:
- return os.ModeDevice, nil
- case syscall.DT_FIFO:
- return os.ModeNamedPipe, nil
- case syscall.DT_SOCK:
- return os.ModeSocket, nil
- default:
- // If syscall returned unknown type (e.g., DT_UNKNOWN, DT_WHT), then
- // resolve actual mode by reading file information.
- return modeType(filepath.Join(osDirname, osBasename))
- }
-}
+++ /dev/null
-// +build aix js nacl solaris
-
-package godirwalk
-
-import (
- "os"
- "path/filepath"
- "syscall"
-)
-
-// modeTypeFromDirent converts a syscall defined constant, which is in the
-// purview of the OS, to a constant defined by Go, assumed by this project to
-// be stable.
-//
-// Because some operating system syscall.Dirent structures do not include a Type
-// field, fall back on Stat of the file system.
-func modeTypeFromDirent(_ *syscall.Dirent, osDirname, osBasename string) (os.FileMode, error) {
- return modeType(filepath.Join(osDirname, osBasename))
-}
+++ /dev/null
-// +build aix darwin dragonfly freebsd netbsd openbsd
-
-package godirwalk
-
-import (
- "reflect"
- "syscall"
- "unsafe"
-)
-
-func nameFromDirent(de *syscall.Dirent) []byte {
- // Because this GOOS' syscall.Dirent provides a Namlen field that says how
- // long the name is, this function does not need to search for the NULL
- // byte.
- ml := int(de.Namlen)
-
- // Convert syscall.Dirent.Name, which is an array of int8, to []byte, by
- // overwriting Cap, Len, and Data slice header fields to values from
- // syscall.Dirent fields. Setting the Cap, Len, and Data field values for
- // the slice header modifies what the slice header points to, and in this
- // case, the name buffer.
- var name []byte
- sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
- sh.Cap = ml
- sh.Len = ml
- sh.Data = uintptr(unsafe.Pointer(&de.Name[0]))
-
- return name
-}
+++ /dev/null
-// +build nacl linux js solaris
-
-package godirwalk
-
-import (
- "bytes"
- "reflect"
- "syscall"
- "unsafe"
-)
-
-// nameOffset is a compile time constant
-const nameOffset = int(unsafe.Offsetof(syscall.Dirent{}.Name))
-
-func nameFromDirent(de *syscall.Dirent) (name []byte) {
- // Because this GOOS' syscall.Dirent does not provide a field that specifies
- // the name length, this function must first calculate the max possible name
- // length, and then search for the NULL byte.
- ml := int(de.Reclen) - nameOffset
-
- // Convert syscall.Dirent.Name, which is an array of int8, to []byte, by
- // overwriting Cap, Len, and Data slice header fields to the max possible
- // name length computed above, and finding the terminating NULL byte.
- sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
- sh.Cap = ml
- sh.Len = ml
- sh.Data = uintptr(unsafe.Pointer(&de.Name[0]))
-
- if index := bytes.IndexByte(name, 0); index >= 0 {
- // Found NULL byte; set slice's cap and len accordingly.
- sh.Cap = index
- sh.Len = index
- return
- }
-
- // NOTE: This branch is not expected, but included for defensive
- // programming, and provides a hard stop on the name based on the structure
- // field array size.
- sh.Cap = len(de.Name)
- sh.Len = sh.Cap
- return
-}
+++ /dev/null
-package godirwalk
-
-// ReadDirents returns a sortable slice of pointers to Dirent structures, each
-// representing the file system name and mode type for one of the immediate
-// descendant of the specified directory. If the specified directory is a
-// symbolic link, it will be resolved.
-//
-// If an optional scratch buffer is provided that is at least one page of
-// memory, it will be used when reading directory entries from the file
-// system. If you plan on calling this function in a loop, you will have
-// significantly better performance if you allocate a scratch buffer and use it
-// each time you call this function.
-//
-// children, err := godirwalk.ReadDirents(osDirname, nil)
-// if err != nil {
-// return nil, errors.Wrap(err, "cannot get list of directory children")
-// }
-// sort.Sort(children)
-// for _, child := range children {
-// fmt.Printf("%s %s\n", child.ModeType, child.Name)
-// }
-func ReadDirents(osDirname string, scratchBuffer []byte) (Dirents, error) {
- return readDirents(osDirname, scratchBuffer)
-}
-
-// ReadDirnames returns a slice of strings, representing the immediate
-// descendants of the specified directory. If the specified directory is a
-// symbolic link, it will be resolved.
-//
-// If an optional scratch buffer is provided that is at least one page of
-// memory, it will be used when reading directory entries from the file
-// system. If you plan on calling this function in a loop, you will have
-// significantly better performance if you allocate a scratch buffer and use it
-// each time you call this function.
-//
-// Note that this function, depending on the operating system, may or may not invoke
-// the ReadDirents function, in order to prepare the list of immediate
-// descendants. Therefore, if your program needs both the names and the file
-// system mode types of descendants, it will always be faster to invoke
-// ReadDirents directly, rather than calling this function, then looping over
-// the results and calling os.Stat or os.LStat for each entry.
-//
-// children, err := godirwalk.ReadDirnames(osDirname, nil)
-// if err != nil {
-// return nil, errors.Wrap(err, "cannot get list of directory children")
-// }
-// sort.Strings(children)
-// for _, child := range children {
-// fmt.Printf("%s\n", child)
-// }
-func ReadDirnames(osDirname string, scratchBuffer []byte) ([]string, error) {
- return readDirnames(osDirname, scratchBuffer)
-}
+++ /dev/null
-// +build !windows
-
-package godirwalk
-
-import (
- "os"
- "syscall"
- "unsafe"
-)
-
-// MinimumScratchBufferSize specifies the minimum size of the scratch buffer
-// that ReadDirents, ReadDirnames, Scanner, and Walk will use when reading file
-// entries from the operating system. During program startup it is initialized
-// to the result from calling `os.Getpagesize()` for non Windows environments,
-// and 0 for Windows.
-var MinimumScratchBufferSize = os.Getpagesize()
-
-func newScratchBuffer() []byte { return make([]byte, MinimumScratchBufferSize) }
-
-func readDirents(osDirname string, scratchBuffer []byte) ([]*Dirent, error) {
- var entries []*Dirent
- var workBuffer []byte
-
- dh, err := os.Open(osDirname)
- if err != nil {
- return nil, err
- }
- fd := int(dh.Fd())
-
- if len(scratchBuffer) < MinimumScratchBufferSize {
- scratchBuffer = newScratchBuffer()
- }
-
- var sde syscall.Dirent
- for {
- if len(workBuffer) == 0 {
- n, err := syscall.ReadDirent(fd, scratchBuffer)
- // n, err := unix.ReadDirent(fd, scratchBuffer)
- if err != nil {
- if err == syscall.EINTR /* || err == unix.EINTR */ {
- continue
- }
- _ = dh.Close()
- return nil, err
- }
- if n <= 0 { // end of directory: normal exit
- if err = dh.Close(); err != nil {
- return nil, err
- }
- return entries, nil
- }
- workBuffer = scratchBuffer[:n] // trim work buffer to number of bytes read
- }
-
- copy((*[unsafe.Sizeof(syscall.Dirent{})]byte)(unsafe.Pointer(&sde))[:], workBuffer)
- workBuffer = workBuffer[reclen(&sde):] // advance buffer for next iteration through loop
-
- if inoFromDirent(&sde) == 0 {
- continue // inode set to 0 indicates an entry that was marked as deleted
- }
-
- nameSlice := nameFromDirent(&sde)
- nameLength := len(nameSlice)
-
- if nameLength == 0 || (nameSlice[0] == '.' && (nameLength == 1 || (nameLength == 2 && nameSlice[1] == '.'))) {
- continue
- }
-
- childName := string(nameSlice)
- mt, err := modeTypeFromDirent(&sde, osDirname, childName)
- if err != nil {
- _ = dh.Close()
- return nil, err
- }
- entries = append(entries, &Dirent{name: childName, path: osDirname, modeType: mt})
- }
-}
-
-func readDirnames(osDirname string, scratchBuffer []byte) ([]string, error) {
- var entries []string
- var workBuffer []byte
- var sde *syscall.Dirent
-
- dh, err := os.Open(osDirname)
- if err != nil {
- return nil, err
- }
- fd := int(dh.Fd())
-
- if len(scratchBuffer) < MinimumScratchBufferSize {
- scratchBuffer = newScratchBuffer()
- }
-
- for {
- if len(workBuffer) == 0 {
- n, err := syscall.ReadDirent(fd, scratchBuffer)
- // n, err := unix.ReadDirent(fd, scratchBuffer)
- if err != nil {
- if err == syscall.EINTR /* || err == unix.EINTR */ {
- continue
- }
- _ = dh.Close()
- return nil, err
- }
- if n <= 0 { // end of directory: normal exit
- if err = dh.Close(); err != nil {
- return nil, err
- }
- return entries, nil
- }
- workBuffer = scratchBuffer[:n] // trim work buffer to number of bytes read
- }
-
- sde = (*syscall.Dirent)(unsafe.Pointer(&workBuffer[0])) // point entry to first syscall.Dirent in buffer
- // Handle first entry in the work buffer.
- workBuffer = workBuffer[reclen(sde):] // advance buffer for next iteration through loop
-
- if inoFromDirent(sde) == 0 {
- continue // inode set to 0 indicates an entry that was marked as deleted
- }
-
- nameSlice := nameFromDirent(sde)
- nameLength := len(nameSlice)
-
- if nameLength == 0 || (nameSlice[0] == '.' && (nameLength == 1 || (nameLength == 2 && nameSlice[1] == '.'))) {
- continue
- }
-
- entries = append(entries, string(nameSlice))
- }
-}
+++ /dev/null
-// +build windows
-
-package godirwalk
-
-import "os"
-
-// MinimumScratchBufferSize specifies the minimum size of the scratch buffer
-// that ReadDirents, ReadDirnames, Scanner, and Walk will use when reading file
-// entries from the operating system. During program startup it is initialized
-// to the result from calling `os.Getpagesize()` for non Windows environments,
-// and 0 for Windows.
-var MinimumScratchBufferSize = 0
-
-func newScratchBuffer() []byte { return nil }
-
-func readDirents(osDirname string, _ []byte) ([]*Dirent, error) {
- dh, err := os.Open(osDirname)
- if err != nil {
- return nil, err
- }
-
- fileinfos, err := dh.Readdir(-1)
- if err != nil {
- _ = dh.Close()
- return nil, err
- }
-
- entries := make([]*Dirent, len(fileinfos))
-
- for i, fi := range fileinfos {
- entries[i] = &Dirent{
- name: fi.Name(),
- path: osDirname,
- modeType: fi.Mode() & os.ModeType,
- }
- }
-
- if err = dh.Close(); err != nil {
- return nil, err
- }
- return entries, nil
-}
-
-func readDirnames(osDirname string, _ []byte) ([]string, error) {
- dh, err := os.Open(osDirname)
- if err != nil {
- return nil, err
- }
-
- fileinfos, err := dh.Readdir(-1)
- if err != nil {
- _ = dh.Close()
- return nil, err
- }
-
- entries := make([]string, len(fileinfos))
-
- for i, fi := range fileinfos {
- entries[i] = fi.Name()
- }
-
- if err = dh.Close(); err != nil {
- return nil, err
- }
- return entries, nil
-}
+++ /dev/null
-// +build dragonfly
-
-package godirwalk
-
-import "syscall"
-
-func reclen(de *syscall.Dirent) uint64 {
- return (16 + uint64(de.Namlen) + 1 + 7) &^ 7
-}
+++ /dev/null
-// +build nacl linux js solaris aix darwin freebsd netbsd openbsd
-
-package godirwalk
-
-import "syscall"
-
-func reclen(de *syscall.Dirent) uint64 {
- return uint64(de.Reclen)
-}
+++ /dev/null
-//go:build !windows
-// +build !windows
-
-package godirwalk
-
-import (
- "os"
- "syscall"
- "unsafe"
-)
-
-// Scanner is an iterator to enumerate the contents of a directory.
-type Scanner struct {
- scratchBuffer []byte // read directory bytes from file system into this buffer
- workBuffer []byte // points into scratchBuffer, from which we chunk out directory entries
- osDirname string
- childName string
- err error // err is the error associated with scanning directory
- statErr error // statErr is any error returned while attempting to stat an entry
- dh *os.File // used to close directory after done reading
- de *Dirent // most recently decoded directory entry
- sde syscall.Dirent
- fd int // file descriptor used to read entries from directory
-}
-
-// NewScanner returns a new directory Scanner that lazily enumerates
-// the contents of a single directory. To prevent resource leaks,
-// caller must invoke either the Scanner's Close or Err method after
-// it has completed scanning a directory.
-//
-// scanner, err := godirwalk.NewScanner(dirname)
-// if err != nil {
-// fatal("cannot scan directory: %s", err)
-// }
-//
-// for scanner.Scan() {
-// dirent, err := scanner.Dirent()
-// if err != nil {
-// warning("cannot get dirent: %s", err)
-// continue
-// }
-// name := dirent.Name()
-// if name == "break" {
-// break
-// }
-// if name == "continue" {
-// continue
-// }
-// fmt.Printf("%v %v\n", dirent.ModeType(), dirent.Name())
-// }
-// if err := scanner.Err(); err != nil {
-// fatal("cannot scan directory: %s", err)
-// }
-func NewScanner(osDirname string) (*Scanner, error) {
- return NewScannerWithScratchBuffer(osDirname, nil)
-}
-
-// NewScannerWithScratchBuffer returns a new directory Scanner that
-// lazily enumerates the contents of a single directory. On platforms
-// other than Windows it uses the provided scratch buffer to read from
-// the file system. On Windows the scratch buffer is ignored. To
-// prevent resource leaks, caller must invoke either the Scanner's
-// Close or Err method after it has completed scanning a directory.
-func NewScannerWithScratchBuffer(osDirname string, scratchBuffer []byte) (*Scanner, error) {
- dh, err := os.Open(osDirname)
- if err != nil {
- return nil, err
- }
- if len(scratchBuffer) < MinimumScratchBufferSize {
- scratchBuffer = newScratchBuffer()
- }
- scanner := &Scanner{
- scratchBuffer: scratchBuffer,
- osDirname: osDirname,
- dh: dh,
- fd: int(dh.Fd()),
- }
- return scanner, nil
-}
-
-// Close releases resources associated with scanning a directory. Call
-// either this or the Err method when the directory no longer needs to
-// be scanned.
-func (s *Scanner) Close() error {
- return s.Err()
-}
-
-// Dirent returns the current directory entry while scanning a directory.
-func (s *Scanner) Dirent() (*Dirent, error) {
- if s.de == nil {
- s.de = &Dirent{name: s.childName, path: s.osDirname}
- s.de.modeType, s.statErr = modeTypeFromDirent(&s.sde, s.osDirname, s.childName)
- }
- return s.de, s.statErr
-}
-
-// done is called when the directory scanner is unable to continue, with either the
-// triggering error, or nil when there are simply no more entries to read from
-// the directory.
-func (s *Scanner) done(err error) {
- if s.dh == nil {
- return
- }
-
- s.err = err
-
- if err = s.dh.Close(); s.err == nil {
- s.err = err
- }
-
- s.osDirname, s.childName = "", ""
- s.scratchBuffer, s.workBuffer = nil, nil
- s.dh, s.de, s.statErr = nil, nil, nil
- s.sde = syscall.Dirent{}
- s.fd = 0
-}
-
-// Err returns any error associated with scanning a directory. It is
-// normal to call Err after Scan returns false, even though they both
-// ensure Scanner resources are released. Call either this or the
-// Close method when the directory no longer needs to be scanned.
-func (s *Scanner) Err() error {
- s.done(nil)
- return s.err
-}
-
-// Name returns the base name of the current directory entry while scanning a
-// directory.
-func (s *Scanner) Name() string { return s.childName }
-
-// Scan potentially reads and then decodes the next directory entry from the
-// file system.
-//
-// When it returns false, this releases resources used by the Scanner; any
-// error associated with closing the file system directory resource is
-// available via the Err method.
-func (s *Scanner) Scan() bool {
- if s.dh == nil {
- return false
- }
-
- s.de = nil
-
- for {
- // When the work buffer has nothing remaining to decode, we need to load
- // more data from disk.
- if len(s.workBuffer) == 0 {
- n, err := syscall.ReadDirent(s.fd, s.scratchBuffer)
- // n, err := unix.ReadDirent(s.fd, s.scratchBuffer)
- if err != nil {
- if err == syscall.EINTR /* || err == unix.EINTR */ {
- continue
- }
- s.done(err) // any other error forces a stop
- return false
- }
- if n <= 0 { // end of directory: normal exit
- s.done(nil)
- return false
- }
- s.workBuffer = s.scratchBuffer[:n] // trim work buffer to number of bytes read
- }
-
- // point entry to first syscall.Dirent in buffer
- copy((*[unsafe.Sizeof(syscall.Dirent{})]byte)(unsafe.Pointer(&s.sde))[:], s.workBuffer)
- s.workBuffer = s.workBuffer[reclen(&s.sde):] // advance buffer for next iteration through loop
-
- if inoFromDirent(&s.sde) == 0 {
- continue // inode set to 0 indicates an entry that was marked as deleted
- }
-
- nameSlice := nameFromDirent(&s.sde)
- nameLength := len(nameSlice)
-
- if nameLength == 0 || (nameSlice[0] == '.' && (nameLength == 1 || (nameLength == 2 && nameSlice[1] == '.'))) {
- continue
- }
-
- s.childName = string(nameSlice)
- return true
- }
-}
+++ /dev/null
-//go:build windows
-// +build windows
-
-package godirwalk
-
-import (
- "fmt"
- "os"
-)
-
-// Scanner is an iterator to enumerate the contents of a directory.
-type Scanner struct {
- osDirname string
- childName string
- dh *os.File // dh is handle to open directory
- de *Dirent
- err error // err is the error associated with scanning directory
- childMode os.FileMode
-}
-
-// NewScanner returns a new directory Scanner that lazily enumerates
-// the contents of a single directory. To prevent resource leaks,
-// caller must invoke either the Scanner's Close or Err method after
-// it has completed scanning a directory.
-//
-// scanner, err := godirwalk.NewScanner(dirname)
-// if err != nil {
-// fatal("cannot scan directory: %s", err)
-// }
-//
-// for scanner.Scan() {
-// dirent, err := scanner.Dirent()
-// if err != nil {
-// warning("cannot get dirent: %s", err)
-// continue
-// }
-// name := dirent.Name()
-// if name == "break" {
-// break
-// }
-// if name == "continue" {
-// continue
-// }
-// fmt.Printf("%v %v\n", dirent.ModeType(), dirent.Name())
-// }
-// if err := scanner.Err(); err != nil {
-// fatal("cannot scan directory: %s", err)
-// }
-func NewScanner(osDirname string) (*Scanner, error) {
- dh, err := os.Open(osDirname)
- if err != nil {
- return nil, err
- }
- scanner := &Scanner{
- osDirname: osDirname,
- dh: dh,
- }
- return scanner, nil
-}
-
-// NewScannerWithScratchBuffer returns a new directory Scanner that
-// lazily enumerates the contents of a single directory. On platforms
-// other than Windows it uses the provided scratch buffer to read from
-// the file system. On Windows the scratch buffer parameter is
-// ignored. To prevent resource leaks, caller must invoke either the
-// Scanner's Close or Err method after it has completed scanning a
-// directory.
-func NewScannerWithScratchBuffer(osDirname string, scratchBuffer []byte) (*Scanner, error) {
- return NewScanner(osDirname)
-}
-
-// Close releases resources associated with scanning a directory. Call
-// either this or the Err method when the directory no longer needs to
-// be scanned.
-func (s *Scanner) Close() error {
- return s.Err()
-}
-
-// Dirent returns the current directory entry while scanning a directory.
-func (s *Scanner) Dirent() (*Dirent, error) {
- if s.de == nil {
- s.de = &Dirent{
- name: s.childName,
- path: s.osDirname,
- modeType: s.childMode,
- }
- }
- return s.de, nil
-}
-
-// done is called when the directory scanner is unable to continue, with either the
-// triggering error, or nil when there are simply no more entries to read from
-// the directory.
-func (s *Scanner) done(err error) {
- if s.dh == nil {
- return
- }
-
- s.err = err
-
- if err = s.dh.Close(); s.err == nil {
- s.err = err
- }
-
- s.childName, s.osDirname = "", ""
- s.de, s.dh = nil, nil
-}
-
-// Err returns any error associated with scanning a directory. It is
-// normal to call Err after Scan returns false, even though they both
-// ensure Scanner resources are released. Call either this or the
-// Close method when the directory no longer needs to be scanned.
-func (s *Scanner) Err() error {
- s.done(nil)
- return s.err
-}
-
-// Name returns the base name of the current directory entry while scanning a
-// directory.
-func (s *Scanner) Name() string { return s.childName }
-
-// Scan potentially reads and then decodes the next directory entry from the
-// file system.
-//
-// When it returns false, this releases resources used by the Scanner; any
-// error associated with closing the file system directory resource is
-// available via the Err method.
-func (s *Scanner) Scan() bool {
- if s.dh == nil {
- return false
- }
-
- s.de = nil
-
- fileinfos, err := s.dh.Readdir(1)
- if err != nil {
- s.done(err)
- return false
- }
-
- if l := len(fileinfos); l != 1 {
- s.done(fmt.Errorf("expected a single entry rather than %d", l))
- return false
- }
-
- fi := fileinfos[0]
- s.childMode = fi.Mode() & os.ModeType
- s.childName = fi.Name()
- return true
-}
+++ /dev/null
-package godirwalk
-
-import "sort"
-
-type scanner interface {
- Dirent() (*Dirent, error)
- Err() error
- Name() string
- Scan() bool
-}
-
-// sortedScanner enumerates through a directory's contents after reading the
-// entire directory and sorting the entries by name. Used by walk to simplify
-// its implementation.
-type sortedScanner struct {
- dd []*Dirent
- de *Dirent
-}
-
-func newSortedScanner(osPathname string, scratchBuffer []byte) (*sortedScanner, error) {
- deChildren, err := ReadDirents(osPathname, scratchBuffer)
- if err != nil {
- return nil, err
- }
- sort.Sort(deChildren)
- return &sortedScanner{dd: deChildren}, nil
-}
-
-func (d *sortedScanner) Err() error {
- d.dd, d.de = nil, nil
- return nil
-}
-
-func (d *sortedScanner) Dirent() (*Dirent, error) { return d.de, nil }
-
-func (d *sortedScanner) Name() string { return d.de.name }
-
-func (d *sortedScanner) Scan() bool {
- if len(d.dd) > 0 {
- d.de, d.dd = d.dd[0], d.dd[1:]
- return true
- }
- return false
-}
+++ /dev/null
-package godirwalk
-
-import (
- "errors"
- "fmt"
- "os"
- "path/filepath"
-)
-
-// Options provide parameters for how the Walk function operates.
-type Options struct {
- // ErrorCallback specifies a function to be invoked in the case of an error
- // that could potentially be ignored while walking a file system
- // hierarchy. When set to nil or left as its zero-value, any error condition
- // causes Walk to immediately return the error describing what took
- // place. When non-nil, this user supplied function is invoked with the OS
- // pathname of the file system object that caused the error along with the
- // error that took place. The return value of the supplied ErrorCallback
- // function determines whether the error will cause Walk to halt immediately
- // as it would were no ErrorCallback value provided, or skip this file
- // system node yet continue on with the remaining nodes in the file system
- // hierarchy.
- //
- // ErrorCallback is invoked both for errors that are returned by the
- // runtime, and for errors returned by other user supplied callback
- // functions.
- ErrorCallback func(string, error) ErrorAction
-
- // FollowSymbolicLinks specifies whether Walk will follow symbolic links
- // that refer to directories. When set to false or left as its zero-value,
- // Walk will still invoke the callback function with symbolic link nodes,
- // but if the symbolic link refers to a directory, it will not recurse on
- // that directory. When set to true, Walk will recurse on symbolic links
- // that refer to a directory.
- FollowSymbolicLinks bool
-
- // Unsorted controls whether or not Walk will sort the immediate descendants
- // of a directory by their relative names prior to visiting each of those
- // entries.
- //
- // When set to false or left at its zero-value, Walk will get the list of
- // immediate descendants of a particular directory, sort that list by
- // lexical order of their names, and then visit each node in the list in
- // sorted order. This will cause Walk to always traverse the same directory
- // tree in the same order, however may be inefficient for directories with
- // many immediate descendants.
- //
- // When set to true, Walk skips sorting the list of immediate descendants
- // for a directory, and simply visits each node in the order the operating
- // system enumerated them. This will be faster, but with the side effect
- // that the traversal order may be different from one invocation to the
- // next.
- Unsorted bool
-
- // Callback is a required function that Walk will invoke for every file
- // system node it encounters.
- Callback WalkFunc
-
- // PostChildrenCallback is an optional function that Walk will invoke for
- // every file system directory it encounters after its children have been
- // processed.
- PostChildrenCallback WalkFunc
-
- // ScratchBuffer is an optional byte slice to use as a scratch buffer for
- // Walk to use when reading directory entries, to reduce the amount of garbage
- // generation. Not all architectures take advantage of the scratch
- // buffer. If omitted or the provided buffer has fewer bytes than
- // MinimumScratchBufferSize, then a buffer with MinimumScratchBufferSize
- // bytes will be created and used once per Walk invocation.
- ScratchBuffer []byte
-
- // AllowNonDirectory causes Walk to bypass the check that ensures it is
- // being called on a directory node, or when FollowSymbolicLinks is true, a
- // symbolic link that points to a directory. Leave this value false to have
- // Walk return an error when called on a non-directory. Set this true to
- // have Walk run even when called on a non-directory node.
- AllowNonDirectory bool
-}
-
-// ErrorAction defines a set of actions the Walk function could take based on
-// the occurrence of an error while walking the file system. See the
-// documentation for the ErrorCallback field of the Options structure for more
-// information.
-type ErrorAction int
-
-const (
- // Halt is the ErrorAction return value when the upstream code wants to halt
- // the walk process when a runtime error takes place. It matches the default
- // action the Walk function would take were no ErrorCallback provided.
- Halt ErrorAction = iota
-
- // SkipNode is the ErrorAction return value when the upstream code wants to
- // ignore the runtime error for the current file system node, skip
- // processing of the node that caused the error, and continue walking the
- // file system hierarchy with the remaining nodes.
- SkipNode
-)
-
-// SkipThis is used as a return value from WalkFuncs to indicate that the file
-// system entry named in the call is to be skipped. It is not returned as an
-// error by any function.
-var SkipThis = errors.New("skip this directory entry")
-
-// WalkFunc is the type of the function called for each file system node visited
-// by Walk. The pathname argument will contain the argument to Walk as a prefix;
-// that is, if Walk is called with "dir", which is a directory containing the
-// file "a", the provided WalkFunc will be invoked with the argument "dir/a",
-// using the correct os.PathSeparator for the Go Operating System architecture,
-// GOOS. The directory entry argument is a pointer to a Dirent for the node,
-// providing access to both the basename and the mode type of the file system
-// node.
-//
-// If an error is returned by the Callback or PostChildrenCallback functions,
-// and no ErrorCallback function is provided, processing stops. If an
-// ErrorCallback function is provided, then it is invoked with the OS pathname
-// of the node that caused the error along with the error. The return
-// value of the ErrorCallback function determines whether to halt processing, or
-// skip this node and continue processing remaining file system nodes.
-//
-// The exception is when the function returns the special value
-// filepath.SkipDir. If the function returns filepath.SkipDir when invoked on a
-// directory, Walk skips the directory's contents entirely. If the function
-// returns filepath.SkipDir when invoked on a non-directory file system node,
-// Walk skips the remaining files in the containing directory. Note that any
-// supplied ErrorCallback function is not invoked with filepath.SkipDir when the
-// Callback or PostChildrenCallback functions return that special value.
-//
-// One arguably confusing aspect of the filepath.WalkFunc API that this library
-// must emulate is how a caller tells Walk to skip file system entries or
-// directories. With both filepath.Walk and this Walk, when a callback function
-// wants to skip a directory and not descend into its children, it returns
-// filepath.SkipDir. If the callback function returns filepath.SkipDir for a
-// non-directory, filepath.Walk and this library will stop processing any more
-// entries in the current directory, which is what many people do not want. If
-// you want to simply skip a particular non-directory entry but continue
-// processing entries in the directory, a callback function must return nil. The
-// implication of this API is that when you want to walk a file system hierarchy and
-// skip an entry, when the entry is a directory, you must return one value,
-// namely filepath.SkipDir, but when the entry is a non-directory, you must
-// return a different value, namely nil. In other words, to get identical
-// behavior for two file system entry types you need to send different token
-// values.
-//
-// Here is an example callback function that adheres to filepath.Walk API to
-// have it skip any file system entry whose full pathname includes a particular
-// substring, optSkip:
-//
-// func callback1(osPathname string, de *godirwalk.Dirent) error {
-// if optSkip != "" && strings.Contains(osPathname, optSkip) {
-// if b, err := de.IsDirOrSymlinkToDir(); b == true && err == nil {
-// return filepath.SkipDir
-// }
-// return nil
-// }
-// // Process file like normal...
-// return nil
-// }
-//
-// This library attempts to eliminate some of that logic boilerplate by
-// providing a new token error value, SkipThis, which a callback function may
-// return to skip the current file system entry regardless of what type of entry
-// it is. If the current entry is a directory, its children will not be
-// enumerated, exactly as if the callback returned filepath.SkipDir. If the
-// current entry is a non-directory, the next file system entry in the current
-// directory will be enumerated, exactly as if the callback returned nil. The
-// following example callback function has identical behavior as the previous,
-// but has less boilerplate, and admittedly simpler logic.
-//
-// func callback2(osPathname string, de *godirwalk.Dirent) error {
-// if optSkip != "" && strings.Contains(osPathname, optSkip) {
-// return godirwalk.SkipThis
-// }
-// // Process file like normal...
-// return nil
-// }
-type WalkFunc func(osPathname string, directoryEntry *Dirent) error
-
-// Walk walks the file tree rooted at the specified directory, calling the
-// specified callback function for each file system node in the tree, including
-// root, symbolic links, and other node types.
-//
-// This function is often much faster than filepath.Walk because it does not
-// invoke os.Stat for every node it encounters, but rather obtains the file
-// system node type when it reads the parent directory.
-//
-// If a runtime error occurs, either from the operating system or from the
-// upstream Callback or PostChildrenCallback functions, processing typically
-// halts. However, when an ErrorCallback function is provided in the provided
-// Options structure, that function is invoked with the error along with the OS
-// pathname of the file system node that caused the error. The ErrorCallback
-// function's return value determines the action that Walk will then take.
-//
-// func main() {
-// dirname := "."
-// if len(os.Args) > 1 {
-// dirname = os.Args[1]
-// }
-// err := godirwalk.Walk(dirname, &godirwalk.Options{
-// Callback: func(osPathname string, de *godirwalk.Dirent) error {
-// fmt.Printf("%s %s\n", de.ModeType(), osPathname)
-// return nil
-// },
-// ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
-// // Your program may want to log the error somehow.
-// fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
-//
-// // For the purposes of this example, a simple SkipNode will suffice,
-// // although in reality perhaps additional logic might be called for.
-// return godirwalk.SkipNode
-// },
-// })
-// if err != nil {
-// fmt.Fprintf(os.Stderr, "%s\n", err)
-// os.Exit(1)
-// }
-// }
-func Walk(pathname string, options *Options) error {
- if options == nil || options.Callback == nil {
- return errors.New("cannot walk without non-nil options and Callback function")
- }
-
- pathname = filepath.Clean(pathname)
-
- var fi os.FileInfo
- var err error
-
- if options.FollowSymbolicLinks {
- fi, err = os.Stat(pathname)
- } else {
- fi, err = os.Lstat(pathname)
- }
- if err != nil {
- return err
- }
-
- mode := fi.Mode()
- if !options.AllowNonDirectory && mode&os.ModeDir == 0 {
- return fmt.Errorf("cannot Walk non-directory: %s", pathname)
- }
-
- dirent := &Dirent{
- name: filepath.Base(pathname),
- path: filepath.Dir(pathname),
- modeType: mode & os.ModeType,
- }
-
- if len(options.ScratchBuffer) < MinimumScratchBufferSize {
- options.ScratchBuffer = newScratchBuffer()
- }
-
- // If ErrorCallback is nil, set to a default value that halts the walk
- // process on all operating system errors. This is done to allow error
- // handling to be more succinct in the walk code.
- if options.ErrorCallback == nil {
- options.ErrorCallback = defaultErrorCallback
- }
-
- err = walk(pathname, dirent, options)
- switch err {
- case nil, SkipThis, filepath.SkipDir:
- // silence SkipThis and filepath.SkipDir for top level
- debug("no error of significance: %v\n", err)
- return nil
- default:
- return err
- }
-}
-
-// defaultErrorCallback always returns Halt because if the upstream code did not
-// provide an ErrorCallback function, walking the file system hierarchy ought to
-// halt upon any operating system error.
-func defaultErrorCallback(_ string, _ error) ErrorAction { return Halt }
-
-// walk recursively traverses the file system node specified by pathname and the
-// Dirent.
-func walk(osPathname string, dirent *Dirent, options *Options) error {
- err := options.Callback(osPathname, dirent)
- if err != nil {
- if err == SkipThis || err == filepath.SkipDir {
- return err
- }
- if action := options.ErrorCallback(osPathname, err); action == SkipNode {
- return nil
- }
- return err
- }
-
- if dirent.IsSymlink() {
- if !options.FollowSymbolicLinks {
- return nil
- }
- // Does this symlink point to a directory?
- info, err := os.Stat(osPathname)
- if err != nil {
- if action := options.ErrorCallback(osPathname, err); action == SkipNode {
- return nil
- }
- return err
- }
- if !info.IsDir() {
- return nil
- }
- } else if !dirent.IsDir() {
- return nil
- }
-
- // If we get here, then the specified pathname refers to a directory or a
- // symbolic link to a directory.
-
- var ds scanner
-
- if options.Unsorted {
- // When upstream does not request a sorted iteration, it's more memory
- // efficient to read a single child at a time from the file system.
- ds, err = NewScanner(osPathname)
- } else {
- // When upstream wants a sorted iteration, we must read the entire
- // directory and sort through the child names, and then iterate on each
- // child.
- ds, err = newSortedScanner(osPathname, options.ScratchBuffer)
- }
- if err != nil {
- if action := options.ErrorCallback(osPathname, err); action == SkipNode {
- return nil
- }
- return err
- }
-
- for ds.Scan() {
- deChild, err := ds.Dirent()
- osChildname := filepath.Join(osPathname, deChild.name)
- if err != nil {
- if action := options.ErrorCallback(osChildname, err); action == SkipNode {
- return nil
- }
- return err
- }
- err = walk(osChildname, deChild, options)
- debug("osChildname: %q; error: %v\n", osChildname, err)
- if err == nil || err == SkipThis {
- continue
- }
- if err != filepath.SkipDir {
- return err
- }
- // When received SkipDir on a directory or a symbolic link to a
- // directory, stop processing that directory but continue processing
- // siblings. When received on a non-directory, stop processing
- // remaining siblings.
- isDir, err := deChild.IsDirOrSymlinkToDir()
- if err != nil {
- if action := options.ErrorCallback(osChildname, err); action == SkipNode {
- continue // ignore and continue with next sibling
- }
- return err // caller does not approve of this error
- }
- if !isDir {
- break // stop processing remaining siblings, but allow post children callback
- }
- // continue processing remaining siblings
- }
- if err = ds.Err(); err != nil {
- return err
- }
-
- if options.PostChildrenCallback == nil {
- return nil
- }
-
- err = options.PostChildrenCallback(osPathname, dirent)
- if err == nil || err == filepath.SkipDir {
- return err
- }
-
- if action := options.ErrorCallback(osPathname, err); action == SkipNode {
- return nil
- }
- return err
-}
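
A minimal usage sketch for the Walk function above, driving the Options fields handled by the code shown (Callback, ErrorCallback, Unsorted); the directory argument and printed output are illustrative only, not part of the vendored source:

```go
package main

import (
	"fmt"
	"os"

	"github.com/karrick/godirwalk"
)

func main() {
	dirname := "."
	if len(os.Args) > 1 {
		dirname = os.Args[1]
	}
	err := godirwalk.Walk(dirname, &godirwalk.Options{
		// Unsorted lets the walker stream children as the OS returns them,
		// which is cheaper when ordering does not matter.
		Unsorted: true,
		Callback: func(osPathname string, _ *godirwalk.Dirent) error {
			fmt.Println(osPathname)
			return nil
		},
		ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
			// SkipNode skips the problematic node and keeps walking;
			// Halt would stop the entire walk.
			fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
			return godirwalk.SkipNode
		},
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err)
		os.Exit(1)
	}
}
```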
+++ /dev/null
-language: go
-dist: trusty
-sudo: required
-cache:
- directories:
- - $HOME/.ccache
- - $HOME/zfs
-
-branches:
- only:
- - master
-
-env:
- - rel=0.6.5.11
- - rel=0.7.6
-
-go:
- - "1.10.x"
- - master
-
-before_install:
- - export MAKEFLAGS=-j$(($(grep -c '^processor' /proc/cpuinfo) * 2 + 1))
- - export PATH=/usr/lib/ccache:$PATH
- - go get github.com/alecthomas/gometalinter
- - gometalinter --install --update
- - sudo apt-get update -y && sudo apt-get install -y libattr1-dev libblkid-dev linux-headers-$(uname -r) tree uuid-dev
- - mkdir -p $HOME/zfs
- - cd $HOME/zfs
- - [[ -d spl-$rel.tar.gz ]] || curl -L https://github.com/zfsonlinux/zfs/releases/download/zfs-$rel/spl-$rel.tar.gz | tar xz
- - [[ -d zfs-$rel.tar.gz ]] || curl -L https://github.com/zfsonlinux/zfs/releases/download/zfs-$rel/zfs-$rel.tar.gz | tar xz
- - (cd spl-$rel && ./configure --prefix=/usr && make && sudo make install)
- - (cd zfs-$rel && ./configure --prefix=/usr && make && sudo make install)
- - sudo modprobe zfs
- - cd $TRAVIS_BUILD_DIR
-
-script:
- - sudo -E $(which go) test -v ./...
- - gometalinter --vendor --vendored-linters ./... || true
- - gometalinter --errors --vendor --vendored-linters ./...
-
-notifications:
- email: false
- irc: "chat.freenode.net#cerana"
+++ /dev/null
-## How to Contribute ##
-
-We always welcome contributions to help make `go-zfs` better. Please take a moment to read this document if you would like to contribute.
-
-### Reporting issues ###
-
-We use [GitHub issues](https://github.com/mistifyio/go-zfs/issues) to track bug reports, feature requests, and pull requests.
-
-If you find a bug:
-
-* Use the GitHub issue search to check whether the bug has already been reported.
-* If the issue has been fixed, try to reproduce the issue using the latest `master` branch of the repository.
-* If the issue still reproduces or has not yet been reported, try to isolate the problem before opening an issue, if possible. Also provide the steps taken to reproduce the bug.
-
-### Pull requests ###
-
-We welcome bug fixes, improvements, and new features. Before embarking on making significant changes, please open an issue and ask first so that you do not risk duplicating efforts or spending time working on something that may be out of scope. For minor items, just open a pull request.
-
-[Fork the project](https://help.github.com/articles/fork-a-repo), clone your fork, and add the upstream to your remote:
-
- $ git clone git@github.com:<your-username>/go-zfs.git
- $ cd go-zfs
- $ git remote add upstream https://github.com/mistifyio/go-zfs.git
-
-If you need to pull new changes committed upstream:
-
- $ git checkout master
- $ git fetch upstream
- $ git merge upstream/master
-
-Don't work directly on master, as this makes it harder to merge later. Create a feature branch for your fix or new feature:
-
- $ git checkout -b <feature-branch-name>
-
-Please try to commit your changes in logical chunks. Ideally, you should include the issue number in the commit message.
-
- $ git commit -m "Issue #<issue-number> - <commit-message>"
-
-Push your feature branch to your fork.
-
- $ git push origin <feature-branch-name>
-
-[Open a Pull Request](https://help.github.com/articles/using-pull-requests) against the upstream master branch. Please give your pull request a clear title and description and note which issue(s) your pull request fixes.
-
-* All Go code should be formatted using [gofmt](http://golang.org/cmd/gofmt/).
-* Every exported function should have [documentation](http://blog.golang.org/godoc-documenting-go-code) and corresponding [tests](http://golang.org/doc/code.html#Testing).
-
-**Important:** By submitting a patch, you agree to allow the project owners to license your work under the [Apache 2.0 License](./LICENSE).
-
-### Go Tools ###
-For consistency and to catch minor issues in all of the Go code, please run the following:
-* goimports
-* go vet
-* golint
-* errcheck
-
-Many editors can execute the above on save.
-
-----
-Guidelines based on http://azkaban.github.io/contributing.html
+++ /dev/null
-Apache License
- Version 2.0, January 2004
- http://www.apache.org/licenses/
-
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
- 1. Definitions.
-
- "License" shall mean the terms and conditions for use, reproduction,
- and distribution as defined by Sections 1 through 9 of this document.
-
- "Licensor" shall mean the copyright owner or entity authorized by
- the copyright owner that is granting the License.
-
- "Legal Entity" shall mean the union of the acting entity and all
- other entities that control, are controlled by, or are under common
- control with that entity. For the purposes of this definition,
- "control" means (i) the power, direct or indirect, to cause the
- direction or management of such entity, whether by contract or
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
- outstanding shares, or (iii) beneficial ownership of such entity.
-
- "You" (or "Your") shall mean an individual or Legal Entity
- exercising permissions granted by this License.
-
- "Source" form shall mean the preferred form for making modifications,
- including but not limited to software source code, documentation
- source, and configuration files.
-
- "Object" form shall mean any form resulting from mechanical
- transformation or translation of a Source form, including but
- not limited to compiled object code, generated documentation,
- and conversions to other media types.
-
- "Work" shall mean the work of authorship, whether in Source or
- Object form, made available under the License, as indicated by a
- copyright notice that is included in or attached to the work
- (an example is provided in the Appendix below).
-
- "Derivative Works" shall mean any work, whether in Source or Object
- form, that is based on (or derived from) the Work and for which the
- editorial revisions, annotations, elaborations, or other modifications
- represent, as a whole, an original work of authorship. For the purposes
- of this License, Derivative Works shall not include works that remain
- separable from, or merely link (or bind by name) to the interfaces of,
- the Work and Derivative Works thereof.
-
- "Contribution" shall mean any work of authorship, including
- the original version of the Work and any modifications or additions
- to that Work or Derivative Works thereof, that is intentionally
- submitted to Licensor for inclusion in the Work by the copyright owner
- or by an individual or Legal Entity authorized to submit on behalf of
- the copyright owner. For the purposes of this definition, "submitted"
- means any form of electronic, verbal, or written communication sent
- to the Licensor or its representatives, including but not limited to
- communication on electronic mailing lists, source code control systems,
- and issue tracking systems that are managed by, or on behalf of, the
- Licensor for the purpose of discussing and improving the Work, but
- excluding communication that is conspicuously marked or otherwise
- designated in writing by the copyright owner as "Not a Contribution."
-
- "Contributor" shall mean Licensor and any individual or Legal Entity
- on behalf of whom a Contribution has been received by Licensor and
- subsequently incorporated within the Work.
-
- 2. Grant of Copyright License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- copyright license to reproduce, prepare Derivative Works of,
- publicly display, publicly perform, sublicense, and distribute the
- Work and such Derivative Works in Source or Object form.
-
- 3. Grant of Patent License. Subject to the terms and conditions of
- this License, each Contributor hereby grants to You a perpetual,
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
- (except as stated in this section) patent license to make, have made,
- use, offer to sell, sell, import, and otherwise transfer the Work,
- where such license applies only to those patent claims licensable
- by such Contributor that are necessarily infringed by their
- Contribution(s) alone or by combination of their Contribution(s)
- with the Work to which such Contribution(s) was submitted. If You
- institute patent litigation against any entity (including a
- cross-claim or counterclaim in a lawsuit) alleging that the Work
- or a Contribution incorporated within the Work constitutes direct
- or contributory patent infringement, then any patent licenses
- granted to You under this License for that Work shall terminate
- as of the date such litigation is filed.
-
- 4. Redistribution. You may reproduce and distribute copies of the
- Work or Derivative Works thereof in any medium, with or without
- modifications, and in Source or Object form, provided that You
- meet the following conditions:
-
- (a) You must give any other recipients of the Work or
- Derivative Works a copy of this License; and
-
- (b) You must cause any modified files to carry prominent notices
- stating that You changed the files; and
-
- (c) You must retain, in the Source form of any Derivative Works
- that You distribute, all copyright, patent, trademark, and
- attribution notices from the Source form of the Work,
- excluding those notices that do not pertain to any part of
- the Derivative Works; and
-
- (d) If the Work includes a "NOTICE" text file as part of its
- distribution, then any Derivative Works that You distribute must
- include a readable copy of the attribution notices contained
- within such NOTICE file, excluding those notices that do not
- pertain to any part of the Derivative Works, in at least one
- of the following places: within a NOTICE text file distributed
- as part of the Derivative Works; within the Source form or
- documentation, if provided along with the Derivative Works; or,
- within a display generated by the Derivative Works, if and
- wherever such third-party notices normally appear. The contents
- of the NOTICE file are for informational purposes only and
- do not modify the License. You may add Your own attribution
- notices within Derivative Works that You distribute, alongside
- or as an addendum to the NOTICE text from the Work, provided
- that such additional attribution notices cannot be construed
- as modifying the License.
-
- You may add Your own copyright statement to Your modifications and
- may provide additional or different license terms and conditions
- for use, reproduction, or distribution of Your modifications, or
- for any such Derivative Works as a whole, provided Your use,
- reproduction, and distribution of the Work otherwise complies with
- the conditions stated in this License.
-
- 5. Submission of Contributions. Unless You explicitly state otherwise,
- any Contribution intentionally submitted for inclusion in the Work
- by You to the Licensor shall be under the terms and conditions of
- this License, without any additional terms or conditions.
- Notwithstanding the above, nothing herein shall supersede or modify
- the terms of any separate license agreement you may have executed
- with Licensor regarding such Contributions.
-
- 6. Trademarks. This License does not grant permission to use the trade
- names, trademarks, service marks, or product names of the Licensor,
- except as required for reasonable and customary use in describing the
- origin of the Work and reproducing the content of the NOTICE file.
-
- 7. Disclaimer of Warranty. Unless required by applicable law or
- agreed to in writing, Licensor provides the Work (and each
- Contributor provides its Contributions) on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
- implied, including, without limitation, any warranties or conditions
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
- PARTICULAR PURPOSE. You are solely responsible for determining the
- appropriateness of using or redistributing the Work and assume any
- risks associated with Your exercise of permissions under this License.
-
- 8. Limitation of Liability. In no event and under no legal theory,
- whether in tort (including negligence), contract, or otherwise,
- unless required by applicable law (such as deliberate and grossly
- negligent acts) or agreed to in writing, shall any Contributor be
- liable to You for damages, including any direct, indirect, special,
- incidental, or consequential damages of any character arising as a
- result of this License or out of the use or inability to use the
- Work (including but not limited to damages for loss of goodwill,
- work stoppage, computer failure or malfunction, or any and all
- other commercial damages or losses), even if such Contributor
- has been advised of the possibility of such damages.
-
- 9. Accepting Warranty or Additional Liability. While redistributing
- the Work or Derivative Works thereof, You may choose to offer,
- and charge a fee for, acceptance of support, warranty, indemnity,
- or other liability obligations and/or rights consistent with this
- License. However, in accepting such obligations, You may act only
- on Your own behalf and on Your sole responsibility, not on behalf
- of any other Contributor, and only if You agree to indemnify,
- defend, and hold each Contributor harmless for any liability
- incurred by, or claims asserted against, such Contributor by reason
- of your accepting any such warranty or additional liability.
-
- END OF TERMS AND CONDITIONS
-
- APPENDIX: How to apply the Apache License to your work.
-
- To apply the Apache License to your work, attach the following
- boilerplate notice, with the fields enclosed by brackets "{}"
- replaced with your own identifying information. (Don't include
- the brackets!) The text should be enclosed in the appropriate
- comment syntax for the file format. We also recommend that a
- file or class name and description of purpose be included on the
- same "printed page" as the copyright notice for easier
- identification within third-party archives.
-
- Copyright (c) 2014, OmniTI Computer Consulting, Inc.
-
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
\ No newline at end of file
+++ /dev/null
-# Go Wrapper for ZFS #
-
-Simple wrappers for ZFS command line tools.
-
-[GoDoc](https://godoc.org/github.com/mistifyio/go-zfs)
-
-## Requirements ##
-
-You need a working ZFS setup. To use it on Ubuntu 14.04, set up ZFS:
-
- sudo apt-get install python-software-properties
- sudo apt-add-repository ppa:zfs-native/stable
- sudo apt-get update
- sudo apt-get install ubuntu-zfs libzfs-dev
-
-Developed using Go 1.3, though nothing currently requires 1.3 specifically. Don't use the Ubuntu packages for Go; install it from http://golang.org/doc/install instead.
-
-You generally need root privileges to use anything ZFS-related.
-
-## Status ##
-
-This has only been tested on Ubuntu 14.04.
-
-In the future, we hope to work directly with libzfs.
-
-# Hacking #
-
-The tests have decent examples for most functions.
-
-```go
-// assuming a zpool named "test"; error handling omitted
-
-f, err := zfs.CreateFilesystem("test/snapshot-test", nil)
-
-s, err := f.Snapshot("test", false)
-
-// the snapshot is named "test/snapshot-test@test"
-
-c, err := s.Clone("test/clone-test", nil)
-
-err = c.Destroy(zfs.DestroyDefault)
-err = s.Destroy(zfs.DestroyDefault)
-err = f.Destroy(zfs.DestroyDefault)
-```
-
-# Contributing #
-
-See the [contributing guidelines](./CONTRIBUTING.md)
-
+++ /dev/null
-
-VAGRANTFILE_API_VERSION = "2"
-
-Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
- config.vm.box = "ubuntu/trusty64"
- config.ssh.forward_agent = true
-
- config.vm.synced_folder ".", "/home/vagrant/go/src/github.com/mistifyio/go-zfs", create: true
-
- config.vm.provision "shell", inline: <<EOF
-cat << END > /etc/profile.d/go.sh
-export GOPATH=\\$HOME/go
-export PATH=\\$GOPATH/bin:/usr/local/go/bin:\\$PATH
-END
-
-chown -R vagrant /home/vagrant/go
-
-apt-get update
-apt-get install -y software-properties-common curl
-apt-add-repository --yes ppa:zfs-native/stable
-apt-get update
-apt-get install -y ubuntu-zfs
-
-cd /home/vagrant
-curl -z go1.3.3.linux-amd64.tar.gz -L -O https://storage.googleapis.com/golang/go1.3.3.linux-amd64.tar.gz
-tar -C /usr/local -zxf /home/vagrant/go1.3.3.linux-amd64.tar.gz
-
-cat << END > /etc/sudoers.d/go
-Defaults env_keep += "GOPATH"
-END
-
-EOF
-
-end
+++ /dev/null
-package zfs
-
-import (
- "fmt"
-)
-
-// Error is an error which is returned when the `zfs` or `zpool` shell
-// commands return with a non-zero exit code.
-type Error struct {
- Err error
- Debug string
- Stderr string
-}
-
-// Error returns the string representation of an Error.
-func (e Error) Error() string {
- return fmt.Sprintf("%s: %q => %s", e.Err, e.Debug, e.Stderr)
-}
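
Because command failures are surfaced as *Error values (see the Run helper in the next file), a caller can recover the command line and captured stderr with a type assertion. A hedged, hypothetical sketch; the dataset name is made up:

```go
package main

import (
	"log"

	zfs "github.com/mistifyio/go-zfs"
)

func main() {
	// "tank/data" is a hypothetical dataset name.
	ds, err := zfs.GetDataset("tank/data")
	if err != nil {
		// Run wraps non-zero exits in *Error, so the command line and its
		// stderr are available for logging.
		if zerr, ok := err.(*zfs.Error); ok {
			log.Fatalf("zfs failed: %v (cmd: %q, stderr: %q)", zerr.Err, zerr.Debug, zerr.Stderr)
		}
		log.Fatal(err)
	}
	log.Printf("dataset %s uses %d bytes", ds.Name, ds.Used)
}
```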
+++ /dev/null
-package zfs
-
-import (
- "bytes"
- "errors"
- "fmt"
- "io"
- "os/exec"
- "regexp"
- "runtime"
- "strconv"
- "strings"
-
- "github.com/google/uuid"
-)
-
-type command struct {
- Command string
- Stdin io.Reader
- Stdout io.Writer
-}
-
-func (c *command) Run(arg ...string) ([][]string, error) {
-
- cmd := exec.Command(c.Command, arg...)
-
- var stdout, stderr bytes.Buffer
-
- if c.Stdout == nil {
- cmd.Stdout = &stdout
- } else {
- cmd.Stdout = c.Stdout
- }
-
- if c.Stdin != nil {
- cmd.Stdin = c.Stdin
-
- }
- cmd.Stderr = &stderr
-
- id := uuid.New().String()
- joinedArgs := strings.Join(cmd.Args, " ")
-
- logger.Log([]string{"ID:" + id, "START", joinedArgs})
- err := cmd.Run()
- logger.Log([]string{"ID:" + id, "FINISH"})
-
- if err != nil {
- return nil, &Error{
- Err: err,
- Debug: strings.Join([]string{cmd.Path, joinedArgs[1:]}, " "),
- Stderr: stderr.String(),
- }
- }
-
- // assume if you passed in something for stdout, that you know what to do with it
- if c.Stdout != nil {
- return nil, nil
- }
-
- lines := strings.Split(stdout.String(), "\n")
-
- //last line is always blank
- lines = lines[0 : len(lines)-1]
- output := make([][]string, len(lines))
-
- for i, l := range lines {
- output[i] = strings.Fields(l)
- }
-
- return output, nil
-}
-
-func setString(field *string, value string) {
- v := ""
- if value != "-" {
- v = value
- }
- *field = v
-}
-
-func setUint(field *uint64, value string) error {
- var v uint64
- if value != "-" {
- var err error
- v, err = strconv.ParseUint(value, 10, 64)
- if err != nil {
- return err
- }
- }
- *field = v
- return nil
-}
-
-func (ds *Dataset) parseLine(line []string) error {
- var err error
-
- if len(line) != len(dsPropList) {
- return errors.New("Output does not match what is expected on this platform")
- }
- setString(&ds.Name, line[0])
- setString(&ds.Origin, line[1])
-
- if err = setUint(&ds.Used, line[2]); err != nil {
- return err
- }
- if err = setUint(&ds.Avail, line[3]); err != nil {
- return err
- }
-
- setString(&ds.Mountpoint, line[4])
- setString(&ds.Compression, line[5])
- setString(&ds.Type, line[6])
-
- if err = setUint(&ds.Volsize, line[7]); err != nil {
- return err
- }
- if err = setUint(&ds.Quota, line[8]); err != nil {
- return err
- }
- if err = setUint(&ds.Referenced, line[9]); err != nil {
- return err
- }
-
- if runtime.GOOS == "solaris" {
- return nil
- }
-
- if err = setUint(&ds.Written, line[10]); err != nil {
- return err
- }
- if err = setUint(&ds.Logicalused, line[11]); err != nil {
- return err
- }
- if err = setUint(&ds.Usedbydataset, line[12]); err != nil {
- return err
- }
-
- return nil
-}
-
-/*
- * from zfs diff's escape function:
- *
- * Prints a file name out a character at a time. If the character is
- * not in the range of what we consider "printable" ASCII, display it
- * as an escaped 3-digit octal value. ASCII values less than a space
- * are all control characters and we declare the upper end as the
- * DELete character. This also is the last 7-bit ASCII character.
- * We choose to treat all 8-bit ASCII as not printable for this
- * application.
- */
-func unescapeFilepath(path string) (string, error) {
- buf := make([]byte, 0, len(path))
- llen := len(path)
- for i := 0; i < llen; {
- if path[i] == '\\' {
- if llen < i+4 {
- return "", fmt.Errorf("Invalid octal code: too short")
- }
- octalCode := path[(i + 1):(i + 4)]
- val, err := strconv.ParseUint(octalCode, 8, 8)
- if err != nil {
- return "", fmt.Errorf("Invalid octal code: %v", err)
- }
- buf = append(buf, byte(val))
- i += 4
- } else {
- buf = append(buf, path[i])
- i++
- }
- }
- return string(buf), nil
-}
-
-var changeTypeMap = map[string]ChangeType{
- "-": Removed,
- "+": Created,
- "M": Modified,
- "R": Renamed,
-}
-var inodeTypeMap = map[string]InodeType{
- "B": BlockDevice,
- "C": CharacterDevice,
- "/": Directory,
- ">": Door,
- "|": NamedPipe,
- "@": SymbolicLink,
- "P": EventPort,
- "=": Socket,
- "F": File,
-}
-
-// matches (+1) or (-1)
-var referenceCountRegex = regexp.MustCompile("\\(([+-]\\d+?)\\)")
-
-func parseReferenceCount(field string) (int, error) {
- matches := referenceCountRegex.FindStringSubmatch(field)
- if matches == nil {
- return 0, fmt.Errorf("Regexp does not match")
- }
- return strconv.Atoi(matches[1])
-}
-
-func parseInodeChange(line []string) (*InodeChange, error) {
- llen := len(line)
- if llen < 1 {
- return nil, fmt.Errorf("Empty line passed")
- }
-
- changeType := changeTypeMap[line[0]]
- if changeType == 0 {
- return nil, fmt.Errorf("Unknown change type '%s'", line[0])
- }
-
- switch changeType {
- case Renamed:
- if llen != 4 {
- return nil, fmt.Errorf("Mismatching number of fields: expect 4, got: %d", llen)
- }
- case Modified:
- if llen != 4 && llen != 3 {
- return nil, fmt.Errorf("Mismatching number of fields: expect 3..4, got: %d", llen)
- }
- default:
- if llen != 3 {
- return nil, fmt.Errorf("Mismatching number of fields: expect 3, got: %d", llen)
- }
- }
-
- inodeType := inodeTypeMap[line[1]]
- if inodeType == 0 {
- return nil, fmt.Errorf("Unknown inode type '%s'", line[1])
- }
-
- path, err := unescapeFilepath(line[2])
- if err != nil {
- return nil, fmt.Errorf("Failed to parse filename: %v", err)
- }
-
- var newPath string
- var referenceCount int
- switch changeType {
- case Renamed:
- newPath, err = unescapeFilepath(line[3])
- if err != nil {
- return nil, fmt.Errorf("Failed to parse filename: %v", err)
- }
- case Modified:
- if llen == 4 {
- referenceCount, err = parseReferenceCount(line[3])
- if err != nil {
- return nil, fmt.Errorf("Failed to parse reference count: %v", err)
- }
- }
- default:
- newPath = ""
- }
-
- return &InodeChange{
- Change: changeType,
- Type: inodeType,
- Path: path,
- NewPath: newPath,
- ReferenceCountChange: referenceCount,
- }, nil
-}
-
-// example input
-//M / /testpool/bar/
-//+ F /testpool/bar/hello.txt
-//M / /testpool/bar/hello.txt (+1)
-//M / /testpool/bar/hello-hardlink
-func parseInodeChanges(lines [][]string) ([]*InodeChange, error) {
- changes := make([]*InodeChange, len(lines))
-
- for i, line := range lines {
- c, err := parseInodeChange(line)
- if err != nil {
- return nil, fmt.Errorf("Failed to parse line %d of zfs diff: %v, got: '%s'", i, err, line)
- }
- changes[i] = c
- }
- return changes, nil
-}
-
-func listByType(t, filter string) ([]*Dataset, error) {
- args := []string{"list", "-rHp", "-t", t, "-o", dsPropListOptions}
-
- if filter != "" {
- args = append(args, filter)
- }
- out, err := zfs(args...)
- if err != nil {
- return nil, err
- }
-
- var datasets []*Dataset
-
- name := ""
- var ds *Dataset
- for _, line := range out {
- if name != line[0] {
- name = line[0]
- ds = &Dataset{Name: name}
- datasets = append(datasets, ds)
- }
- if err := ds.parseLine(line); err != nil {
- return nil, err
- }
- }
-
- return datasets, nil
-}
-
-func propsSlice(properties map[string]string) []string {
- args := make([]string, 0, len(properties)*3)
- for k, v := range properties {
- args = append(args, "-o")
- args = append(args, fmt.Sprintf("%s=%s", k, v))
- }
- return args
-}
-
-func (z *Zpool) parseLine(line []string) error {
- prop := line[1]
- val := line[2]
-
- var err error
-
- switch prop {
- case "name":
- setString(&z.Name, val)
- case "health":
- setString(&z.Health, val)
- case "allocated":
- err = setUint(&z.Allocated, val)
- case "size":
- err = setUint(&z.Size, val)
- case "free":
- err = setUint(&z.Free, val)
- case "fragmentation":
- // Trim trailing "%" before parsing uint
- i := strings.Index(val, "%")
- if i < 0 {
- i = len(val)
- }
- err = setUint(&z.Fragmentation, val[:i])
- case "readonly":
- z.ReadOnly = val == "on"
- case "freeing":
- err = setUint(&z.Freeing, val)
- case "leaked":
- err = setUint(&z.Leaked, val)
- case "dedupratio":
- // Trim trailing "x" before parsing float64
- z.DedupRatio, err = strconv.ParseFloat(val[:len(val)-1], 64)
- }
- return err
-}
+++ /dev/null
-// +build !solaris
-
-package zfs
-
-import (
- "strings"
-)
-
-// List of ZFS properties to retrieve from zfs list command on a non-Solaris platform
-var dsPropList = []string{"name", "origin", "used", "available", "mountpoint", "compression", "type", "volsize", "quota", "referenced", "written", "logicalused", "usedbydataset"}
-
-var dsPropListOptions = strings.Join(dsPropList, ",")
-
-// List of Zpool properties to retrieve from zpool list command on a non-Solaris platform
-var zpoolPropList = []string{"name", "health", "allocated", "size", "free", "readonly", "dedupratio", "fragmentation", "freeing", "leaked"}
-var zpoolPropListOptions = strings.Join(zpoolPropList, ",")
-var zpoolArgs = []string{"get", "-p", zpoolPropListOptions}
+++ /dev/null
-// +build solaris
-
-package zfs
-
-import (
- "strings"
-)
-
-// List of ZFS properties to retrieve from zfs list command on a Solaris platform
-var dsPropList = []string{"name", "origin", "used", "available", "mountpoint", "compression", "type", "volsize", "quota", "referenced"}
-
-var dsPropListOptions = strings.Join(dsPropList, ",")
-
-// List of Zpool properties to retrieve from zpool list command on a Solaris platform
-var zpoolPropList = []string{"name", "health", "allocated", "size", "free", "readonly", "dedupratio"}
-var zpoolPropListOptions = strings.Join(zpoolPropList, ",")
-var zpoolArgs = []string{"get", "-p", zpoolPropListOptions}
+++ /dev/null
-// Package zfs provides wrappers around the ZFS command line tools.
-package zfs
-
-import (
- "errors"
- "fmt"
- "io"
- "strconv"
- "strings"
-)
-
-// ZFS dataset types, which can indicate if a dataset is a filesystem,
-// snapshot, or volume.
-const (
- DatasetFilesystem = "filesystem"
- DatasetSnapshot = "snapshot"
- DatasetVolume = "volume"
-)
-
-// Dataset is a ZFS dataset. A dataset could be a clone, filesystem, snapshot,
-// or volume. The Type struct member can be used to determine a dataset's type.
-//
-// The field definitions can be found in the ZFS manual:
-// http://www.freebsd.org/cgi/man.cgi?zfs(8).
-type Dataset struct {
- Name string
- Origin string
- Used uint64
- Avail uint64
- Mountpoint string
- Compression string
- Type string
- Written uint64
- Volsize uint64
- Logicalused uint64
- Usedbydataset uint64
- Quota uint64
- Referenced uint64
-}
-
-// InodeType is the type of inode as reported by Diff
-type InodeType int
-
-// Types of Inodes
-const (
- _ = iota // 0 == unknown type
- BlockDevice InodeType = iota
- CharacterDevice
- Directory
- Door
- NamedPipe
- SymbolicLink
- EventPort
- Socket
- File
-)
-
-// ChangeType is the type of inode change as reported by Diff
-type ChangeType int
-
-// Types of Changes
-const (
- _ = iota // 0 == unknown type
- Removed ChangeType = iota
- Created
- Modified
- Renamed
-)
-
-// DestroyFlag is the options flag passed to Destroy
-type DestroyFlag int
-
-// Valid destroy options
-const (
- DestroyDefault DestroyFlag = 1 << iota
- DestroyRecursive = 1 << iota
- DestroyRecursiveClones = 1 << iota
- DestroyDeferDeletion = 1 << iota
- DestroyForceUmount = 1 << iota
-)
-
-// InodeChange represents a change as reported by Diff
-type InodeChange struct {
- Change ChangeType
- Type InodeType
- Path string
- NewPath string
- ReferenceCountChange int
-}
-
-// Logger can be used to log commands/actions
-type Logger interface {
- Log(cmd []string)
-}
-
-type defaultLogger struct{}
-
-func (*defaultLogger) Log(cmd []string) {
- return
-}
-
-var logger Logger = &defaultLogger{}
-
-// SetLogger sets a log handler used to log all commands, including arguments,
-// before they are executed.
-func SetLogger(l Logger) {
- if l != nil {
- logger = l
- }
-}
-
-// zfs is a helper function to wrap typical calls to zfs.
-func zfs(arg ...string) ([][]string, error) {
- c := command{Command: "zfs"}
- return c.Run(arg...)
-}
-
-// Datasets returns a slice of ZFS datasets, regardless of type.
-// A filter argument may be passed to select a dataset with the matching name,
-// or empty string ("") may be used to select all datasets.
-func Datasets(filter string) ([]*Dataset, error) {
- return listByType("all", filter)
-}
-
-// Snapshots returns a slice of ZFS snapshots.
-// A filter argument may be passed to select a snapshot with the matching name,
-// or empty string ("") may be used to select all snapshots.
-func Snapshots(filter string) ([]*Dataset, error) {
- return listByType(DatasetSnapshot, filter)
-}
-
-// Filesystems returns a slice of ZFS filesystems.
-// A filter argument may be passed to select a filesystem with the matching name,
-// or empty string ("") may be used to select all filesystems.
-func Filesystems(filter string) ([]*Dataset, error) {
- return listByType(DatasetFilesystem, filter)
-}
-
-// Volumes returns a slice of ZFS volumes.
-// A filter argument may be passed to select a volume with the matching name,
-// or empty string ("") may be used to select all volumes.
-func Volumes(filter string) ([]*Dataset, error) {
- return listByType(DatasetVolume, filter)
-}
-
-// GetDataset retrieves a single ZFS dataset by name. This dataset could be
-// any valid ZFS dataset type, such as a clone, filesystem, snapshot, or volume.
-func GetDataset(name string) (*Dataset, error) {
- out, err := zfs("list", "-Hp", "-o", dsPropListOptions, name)
- if err != nil {
- return nil, err
- }
-
- ds := &Dataset{Name: name}
- for _, line := range out {
- if err := ds.parseLine(line); err != nil {
- return nil, err
- }
- }
-
- return ds, nil
-}
-
-// Clone clones a ZFS snapshot and returns a clone dataset.
-// An error will be returned if the input dataset is not of snapshot type.
-func (d *Dataset) Clone(dest string, properties map[string]string) (*Dataset, error) {
- if d.Type != DatasetSnapshot {
- return nil, errors.New("can only clone snapshots")
- }
- args := make([]string, 2, 4)
- args[0] = "clone"
- args[1] = "-p"
- if properties != nil {
- args = append(args, propsSlice(properties)...)
- }
- args = append(args, []string{d.Name, dest}...)
- _, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- return GetDataset(dest)
-}
-
-// Unmount unmounts currently mounted ZFS file systems.
-func (d *Dataset) Unmount(force bool) (*Dataset, error) {
- if d.Type == DatasetSnapshot {
- return nil, errors.New("cannot unmount snapshots")
- }
- args := make([]string, 1, 3)
- args[0] = "umount"
- if force {
- args = append(args, "-f")
- }
- args = append(args, d.Name)
- _, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- return GetDataset(d.Name)
-}
-
-// Mount mounts ZFS file systems.
-func (d *Dataset) Mount(overlay bool, options []string) (*Dataset, error) {
- if d.Type == DatasetSnapshot {
- return nil, errors.New("cannot mount snapshots")
- }
- args := make([]string, 1, 5)
- args[0] = "mount"
- if overlay {
- args = append(args, "-O")
- }
- if options != nil {
- args = append(args, "-o")
- args = append(args, strings.Join(options, ","))
- }
- args = append(args, d.Name)
- _, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- return GetDataset(d.Name)
-}
-
-// ReceiveSnapshot receives a ZFS stream from the input io.Reader, creates a
-// new snapshot with the specified name, and streams the input data into the
-// newly-created snapshot.
-func ReceiveSnapshot(input io.Reader, name string) (*Dataset, error) {
- c := command{Command: "zfs", Stdin: input}
- _, err := c.Run("receive", name)
- if err != nil {
- return nil, err
- }
- return GetDataset(name)
-}
-
-// SendSnapshot sends a ZFS stream of a snapshot to the input io.Writer.
-// An error will be returned if the input dataset is not of snapshot type.
-func (d *Dataset) SendSnapshot(output io.Writer) error {
- if d.Type != DatasetSnapshot {
- return errors.New("can only send snapshots")
- }
-
- c := command{Command: "zfs", Stdout: output}
- _, err := c.Run("send", d.Name)
- return err
-}
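
SendSnapshot and ReceiveSnapshot pair naturally through an io.Pipe when replicating within a single process. A hedged sketch under that assumption; the dataset and snapshot names are hypothetical:

```go
package main

import (
	"io"
	"log"

	zfs "github.com/mistifyio/go-zfs"
)

func main() {
	// "tank/data@nightly" and "tank/backup@nightly" are hypothetical names.
	src, err := zfs.GetDataset("tank/data@nightly")
	if err != nil {
		log.Fatal(err)
	}

	pr, pw := io.Pipe()
	go func() {
		// Close the write end with SendSnapshot's result so the receiver
		// observes either EOF or the send error.
		pw.CloseWithError(src.SendSnapshot(pw))
	}()

	dst, err := zfs.ReceiveSnapshot(pr, "tank/backup@nightly")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received %s", dst.Name)
}
```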
-
-// CreateVolume creates a new ZFS volume with the specified name, size, and
-// properties.
-// A full list of available ZFS properties may be found here:
-// https://www.freebsd.org/cgi/man.cgi?zfs(8).
-func CreateVolume(name string, size uint64, properties map[string]string) (*Dataset, error) {
- args := make([]string, 4, 5)
- args[0] = "create"
- args[1] = "-p"
- args[2] = "-V"
- args[3] = strconv.FormatUint(size, 10)
- if properties != nil {
- args = append(args, propsSlice(properties)...)
- }
- args = append(args, name)
- _, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- return GetDataset(name)
-}
-
-// Destroy destroys a ZFS dataset. If the DestroyRecursive flag is set, any
-// descendants of the dataset will be recursively destroyed, including snapshots.
-// If the DestroyDeferDeletion flag is set, the snapshot is marked for deferred
-// deletion.
-func (d *Dataset) Destroy(flags DestroyFlag) error {
- args := make([]string, 1, 3)
- args[0] = "destroy"
- if flags&DestroyRecursive != 0 {
- args = append(args, "-r")
- }
-
- if flags&DestroyRecursiveClones != 0 {
- args = append(args, "-R")
- }
-
- if flags&DestroyDeferDeletion != 0 {
- args = append(args, "-d")
- }
-
- if flags&DestroyForceUmount != 0 {
- args = append(args, "-f")
- }
-
- args = append(args, d.Name)
- _, err := zfs(args...)
- return err
-}
-
-// SetProperty sets a ZFS property on the receiving dataset.
-// A full list of available ZFS properties may be found here:
-// https://www.freebsd.org/cgi/man.cgi?zfs(8).
-func (d *Dataset) SetProperty(key, val string) error {
- prop := strings.Join([]string{key, val}, "=")
- _, err := zfs("set", prop, d.Name)
- return err
-}
-
-// GetProperty returns the current value of a ZFS property from the
-// receiving dataset.
-// A full list of available ZFS properties may be found here:
-// https://www.freebsd.org/cgi/man.cgi?zfs(8).
-func (d *Dataset) GetProperty(key string) (string, error) {
- out, err := zfs("get", "-H", key, d.Name)
- if err != nil {
- return "", err
- }
-
- return out[0][2], nil
-}
-
-// Rename renames a dataset.
-func (d *Dataset) Rename(name string, createParent bool, recursiveRenameSnapshots bool) (*Dataset, error) {
- args := make([]string, 3, 5)
- args[0] = "rename"
- args[1] = d.Name
- args[2] = name
- if createParent {
- args = append(args, "-p")
- }
- if recursiveRenameSnapshots {
- args = append(args, "-r")
- }
- _, err := zfs(args...)
- if err != nil {
- return d, err
- }
-
- return GetDataset(name)
-}
-
-// Snapshots returns a slice of all ZFS snapshots of a given dataset.
-func (d *Dataset) Snapshots() ([]*Dataset, error) {
- return Snapshots(d.Name)
-}
-
-// CreateFilesystem creates a new ZFS filesystem with the specified name and
-// properties.
-// A full list of available ZFS properties may be found here:
-// https://www.freebsd.org/cgi/man.cgi?zfs(8).
-func CreateFilesystem(name string, properties map[string]string) (*Dataset, error) {
- args := make([]string, 1, 4)
- args[0] = "create"
-
- if properties != nil {
- args = append(args, propsSlice(properties)...)
- }
-
- args = append(args, name)
- _, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- return GetDataset(name)
-}
-
-// Snapshot creates a new ZFS snapshot of the receiving dataset, using the
-// specified name. Optionally, the snapshot can be taken recursively, creating
-// snapshots of all descendent filesystems in a single, atomic operation.
-func (d *Dataset) Snapshot(name string, recursive bool) (*Dataset, error) {
- args := make([]string, 1, 4)
- args[0] = "snapshot"
- if recursive {
- args = append(args, "-r")
- }
- snapName := fmt.Sprintf("%s@%s", d.Name, name)
- args = append(args, snapName)
- _, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- return GetDataset(snapName)
-}
-
-// Rollback rolls back the receiving ZFS dataset to a previous snapshot.
-// Optionally, intermediate snapshots can be destroyed. A ZFS snapshot
-// rollback cannot be completed without this option, if more recent
-// snapshots exist.
-// An error will be returned if the input dataset is not of snapshot type.
-func (d *Dataset) Rollback(destroyMoreRecent bool) error {
- if d.Type != DatasetSnapshot {
- return errors.New("can only rollback snapshots")
- }
-
- args := make([]string, 1, 3)
- args[0] = "rollback"
- if destroyMoreRecent {
- args = append(args, "-r")
- }
- args = append(args, d.Name)
-
- _, err := zfs(args...)
- return err
-}
-
-// Children returns a slice of children of the receiving ZFS dataset.
-// A recursion depth may be specified, or a depth of 0 allows unlimited
-// recursion.
-func (d *Dataset) Children(depth uint64) ([]*Dataset, error) {
- args := []string{"list"}
- if depth > 0 {
- args = append(args, "-d")
- args = append(args, strconv.FormatUint(depth, 10))
- } else {
- args = append(args, "-r")
- }
- args = append(args, "-t", "all", "-Hp", "-o", dsPropListOptions)
- args = append(args, d.Name)
-
- out, err := zfs(args...)
- if err != nil {
- return nil, err
- }
-
- var datasets []*Dataset
- name := ""
- var ds *Dataset
- for _, line := range out {
- if name != line[0] {
- name = line[0]
- ds = &Dataset{Name: name}
- datasets = append(datasets, ds)
- }
- if err := ds.parseLine(line); err != nil {
- return nil, err
- }
- }
- return datasets[1:], nil
-}
-
-// Diff returns changes between a snapshot and the given ZFS dataset.
-// The snapshot name must include the filesystem part as it is possible to
-// compare clones with their origin snapshots.
-func (d *Dataset) Diff(snapshot string) ([]*InodeChange, error) {
- args := []string{"diff", "-FH", snapshot, d.Name}[:]
- out, err := zfs(args...)
- if err != nil {
- return nil, err
- }
- inodeChanges, err := parseInodeChanges(out)
- if err != nil {
- return nil, err
- }
- return inodeChanges, nil
-}
+++ /dev/null
-package zfs
-
-// ZFS zpool states, which can indicate if a pool is online, offline,
-// degraded, etc. More information regarding zpool states can be found here:
-// https://docs.oracle.com/cd/E19253-01/819-5461/gamno/index.html.
-const (
- ZpoolOnline = "ONLINE"
- ZpoolDegraded = "DEGRADED"
- ZpoolFaulted = "FAULTED"
- ZpoolOffline = "OFFLINE"
- ZpoolUnavail = "UNAVAIL"
- ZpoolRemoved = "REMOVED"
-)
-
-// Zpool is a ZFS zpool. A pool is a top-level structure in ZFS, and can
-// contain many descendent datasets.
-type Zpool struct {
- Name string
- Health string
- Allocated uint64
- Size uint64
- Free uint64
- Fragmentation uint64
- ReadOnly bool
- Freeing uint64
- Leaked uint64
- DedupRatio float64
-}
-
-// zpool is a helper function to wrap typical calls to zpool.
-func zpool(arg ...string) ([][]string, error) {
- c := command{Command: "zpool"}
- return c.Run(arg...)
-}
-
-// GetZpool retrieves a single ZFS zpool by name.
-func GetZpool(name string) (*Zpool, error) {
- args := zpoolArgs
- args = append(args, name)
- out, err := zpool(args...)
- if err != nil {
- return nil, err
- }
-
- // there is no -H
- out = out[1:]
-
- z := &Zpool{Name: name}
- for _, line := range out {
- if err := z.parseLine(line); err != nil {
- return nil, err
- }
- }
-
- return z, nil
-}
-
-// Datasets returns a slice of all ZFS datasets in a zpool.
-func (z *Zpool) Datasets() ([]*Dataset, error) {
- return Datasets(z.Name)
-}
-
-// Snapshots returns a slice of all ZFS snapshots in a zpool.
-func (z *Zpool) Snapshots() ([]*Dataset, error) {
- return Snapshots(z.Name)
-}
-
-// CreateZpool creates a new ZFS zpool with the specified name, properties,
-// and optional arguments.
-// A full list of available ZFS properties and command-line arguments may be
-// found here: https://www.freebsd.org/cgi/man.cgi?zfs(8).
-func CreateZpool(name string, properties map[string]string, args ...string) (*Zpool, error) {
- cli := make([]string, 1, 4)
- cli[0] = "create"
- if properties != nil {
- cli = append(cli, propsSlice(properties)...)
- }
- cli = append(cli, name)
- cli = append(cli, args...)
- _, err := zpool(cli...)
- if err != nil {
- return nil, err
- }
-
- return &Zpool{Name: name}, nil
-}
-
-// Destroy destroys a ZFS zpool by name.
-func (z *Zpool) Destroy() error {
- _, err := zpool("destroy", z.Name)
- return err
-}
-
-// ListZpools lists all ZFS zpools accessible on the current system.
-func ListZpools() ([]*Zpool, error) {
- args := []string{"list", "-Ho", "name"}
- out, err := zpool(args...)
- if err != nil {
- return nil, err
- }
-
- var pools []*Zpool
-
- for _, line := range out {
- z, err := GetZpool(line[0])
- if err != nil {
- return nil, err
- }
- pools = append(pools, z)
- }
- return pools, nil
-}
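
Putting the pool-level helpers together with SetLogger from zfs.go, a small hedged sketch that logs each underlying command and walks every pool and its datasets; it assumes a working ZFS setup and, typically, root privileges:

```go
package main

import (
	"fmt"
	"log"

	zfs "github.com/mistifyio/go-zfs"
)

// cmdLogger satisfies zfs.Logger and prints every zfs/zpool command before it runs.
type cmdLogger struct{}

func (cmdLogger) Log(cmd []string) { log.Println("exec:", cmd) }

func main() {
	zfs.SetLogger(cmdLogger{})

	pools, err := zfs.ListZpools()
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pools {
		fmt.Printf("pool %s: %s, %d/%d bytes allocated\n", p.Name, p.Health, p.Allocated, p.Size)

		datasets, err := p.Datasets()
		if err != nil {
			log.Fatal(err)
		}
		for _, ds := range datasets {
			fmt.Printf("  %s (%s) used=%d\n", ds.Name, ds.Type, ds.Used)
		}
	}
}
```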
# github.com/google/btree v1.1.3
## explicit; go 1.18
github.com/google/btree
-# github.com/google/cadvisor v0.53.0
+# github.com/google/cadvisor v0.55.1
## explicit; go 1.23.0
github.com/google/cadvisor/cache/memory
github.com/google/cadvisor/client/v2
github.com/google/cadvisor/devicemapper
github.com/google/cadvisor/events
github.com/google/cadvisor/fs
+github.com/google/cadvisor/fs/btrfs
+github.com/google/cadvisor/fs/btrfs/install
+github.com/google/cadvisor/fs/devicemapper
+github.com/google/cadvisor/fs/devicemapper/install
+github.com/google/cadvisor/fs/nfs
+github.com/google/cadvisor/fs/nfs/install
+github.com/google/cadvisor/fs/overlay
+github.com/google/cadvisor/fs/overlay/install
+github.com/google/cadvisor/fs/tmpfs
+github.com/google/cadvisor/fs/tmpfs/install
+github.com/google/cadvisor/fs/vfs
+github.com/google/cadvisor/fs/vfs/install
github.com/google/cadvisor/info/v1
github.com/google/cadvisor/info/v2
github.com/google/cadvisor/machine
# github.com/json-iterator/go v1.1.12
## explicit; go 1.12
github.com/json-iterator/go
-# github.com/karrick/godirwalk v1.17.0
-## explicit; go 1.13
-github.com/karrick/godirwalk
# github.com/kr/pretty v0.3.1
## explicit; go 1.12
# github.com/kr/text v0.2.0
github.com/mailru/easyjson/jwriter
# github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible
## explicit
-github.com/mistifyio/go-zfs
# github.com/mitchellh/go-wordwrap v1.0.1
## explicit; go 1.14
github.com/mitchellh/go-wordwrap