
How to implement caching when writing a Rust library

Author: Star-tears

Moka is a high-performance caching library for Rust. It provides several kinds of cache data structures, including hash-map based caches, LRU (least recently used) eviction, and TTL (time-to-live) expiration. This article walks through how to implement caching when writing a Rust library; if that interests you, read on.

Implementing caching when writing a Rust library

Dependencies

When writing a library, caching request responses calls for a global variable, so we add the lazy_static crate.

Add the following dependencies to Cargo.toml:

[dependencies]
chrono = "0.4.31"
lazy_static = "1.4.0"
reqwest = { version = "0.11.23", features = ["blocking", "json"] }
serde = { version = "1.0.193", features = ["derive"] }
serde_json = "1.0.108"

Code implementation

use std::{collections::HashMap, sync::Mutex};
use chrono::{DateTime, Utc};
use lazy_static::lazy_static;
use serde_json::Value;
lazy_static! {
    // Global, thread-safe cache mapping a URL to its most recent response.
    static ref REQUESTS_RESPONSE_CACHE: Mutex<HashMap<String, RequestsResponseCache>> =
        Mutex::new(HashMap::new());
}
pub struct RequestsResponseCache {
    pub response: Value,
    pub datetime: DateTime<Utc>,
}
pub fn get_requests_response_cache(url: &str) -> Result<Value, reqwest::Error> {
    let mut cache = REQUESTS_RESPONSE_CACHE.lock().unwrap();
    // Serve from the cache if the entry is still fresh (less than one hour old).
    if let Some(cache_entry) = cache.get(url) {
        let elapsed = Utc::now() - cache_entry.datetime;
        if elapsed.num_seconds() <= 3600 {
            return Ok(cache_entry.response.clone());
        }
    }
    // Cache miss or stale entry: fetch again, update the cache, and return the fresh response.
    // Note that the Mutex stays locked while the blocking request runs.
    let response: Value = reqwest::blocking::get(url)?.json()?;
    let res = response.clone();
    let cache_entry = RequestsResponseCache {
        response,
        datetime: Utc::now(),
    };
    cache.insert(url.to_string(), cache_entry);
    Ok(res)
}

The lazy_static macro creates a static global variable, REQUESTS_RESPONSE_CACHE, a Mutex-wrapped HashMap that stores cached request responses. Because it is protected by a Mutex, the cache is thread-safe and can be read and updated from multiple threads.

Next, the RequestsResponseCache struct represents a single cache entry, holding the response data (response) and the timestamp when it was cached (datetime).

Finally, the get_requests_response_cache function fetches a response through the cache. It first looks up the given url: if a cached entry exists and is at most 3600 seconds (1 hour) old, the cached response is returned directly. If there is no entry, or the entry is older than one hour, the function issues the request again, stores the fresh response in the cache, and returns it.

This provides a simple request/response cache: responses are kept valid for a limited time, repeated requests to the remote service are avoided, and performance improves.
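
For illustration, here is a minimal sketch of how a caller might use this function. The URL is a hypothetical stand-in for any endpoint that returns JSON:

fn main() {
    // Hypothetical endpoint; any URL that returns JSON will do.
    let url = "https://httpbin.org/json";
    // The first call goes out to the network and fills the cache.
    let first = get_requests_response_cache(url).expect("request failed");
    // A second call within an hour is answered from the in-memory cache.
    let second = get_requests_response_cache(url).expect("request failed");
    assert_eq!(first, second);
    println!("{}", first);
}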

Supplement:

A brief introduction to moka, a Rust caching library

About moka

"Moka" is a high-performance caching library for Rust. It provides several kinds of cache data structures, including hash-map based caches, LRU (least recently used) eviction, and TTL (time-to-live) expiration.
The examples below demonstrate some of moka's features.

moka's GitHub repository: moka

Usage examples for moka
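
To run the examples, the project needs moka itself, plus an async runtime and futures-util for the third example. The following Cargo.toml is a sketch with assumed versions rather than part of the original article; moka's API has shifted between releases (for example, remove on the sync cache and the async get behave differently in 0.11 and 0.12), so the snippets may need small adjustments for the exact version you pick:

[dependencies]
# "sync" enables moka::sync::Cache, "future" enables moka::future::Cache (feature names as of moka 0.12)
moka = { version = "0.12", features = ["sync", "future"] }
tokio = { version = "1", features = ["full"] }
futures-util = "0.3"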

1. Event notification:
Moka can invoke a callback when a cache entry is removed, whether it expired, was removed explicitly by the caller, or was evicted because the cache reached its size limit, so you can run follow-up work on eviction.

use moka::{notification::RemovalCause, sync::Cache};
use std::time::{Duration, Instant};
fn main() {
    // Create an eviction-listener closure for cache entries.
    let now = Instant::now();
    let listener = move |k, v: String, cause| {
        // The listener fires whenever an entry is removed. RemovalCause covers four cases:
        // Expired (the entry expired), Explicit (the caller removed it explicitly),
        // Replaced (the entry was updated or replaced), and Size (evicted because the cache reached its capacity).
        println!(
            "== An entry has been evicted. time:{} k: {:?}, v: {:?},cause:{:?}",
            now.elapsed().as_secs(),
            k,
            v,
            cause
        );
        // Handle each cause separately if needed:
        // match cause {
        //     RemovalCause::Expired => {}
        //     RemovalCause::Explicit => {}
        //     RemovalCause::Replaced => {}
        //     RemovalCause::Size => {}
        // }
    };
    // Idle expiration: an entry expires 10 seconds after it was last accessed.
    let idle_time = Duration::from_secs(10);
    // Build a cache with idle-based expiration and the eviction listener.
    let cache: Cache<String, String> = Cache::builder()
        .time_to_idle(idle_time)
        .eviction_listener(listener)
        .build();
    // Insert some entries.
    cache.insert("key1".to_string(), "value1".to_string());
    cache.insert("key2".to_string(), "value2".to_string());
    cache.insert("key3".to_string(), "value3".to_string());
    // Access key1 after 5 seconds, which resets its idle timer.
    std::thread::sleep(Duration::from_secs(5));
    if let Some(value) = cache.get(&"key1".to_string()) {
        println!("5s: Value of key1: {}", value);
    }
    cache.remove("key3");
    println!("5s: remove key3");
    // Wait another 6 seconds so key2, untouched for 11 seconds, expires.
    std::thread::sleep(Duration::from_secs(6));
    // Try to read "key1" (last accessed 6 seconds ago, so still alive).
    if let Some(value) = cache.get("key1") {
        println!("11s: Value of key1: {}", value);
    } else {
        println!("Key1 has expired.");
    }
    // Try to read "key2".
    if let Some(value) = cache.get("key2") {
        println!("11s: Value of key2: {}", value);
    } else {
        println!("Key2 has expired.");
    }
    // Try to read "key3".
    if let Some(value) = cache.get("key3") {
        println!("11s: Value of key3: {}", value);
    } else {
        println!("Key3 has removed.");
    }
    // Leave the cache untouched for another 11 seconds so key1 also exceeds the 10-second idle limit.
    std::thread::sleep(Duration::from_secs(11));
    // Try to read "key1" again.
    if let Some(value) = cache.get("key1") {
        println!("22s: Value of key1: {}", value);
    } else {
        println!("Key1 has expired.");
    }
}

Output:

5s: Value of key1: value1
== An entry has been evicted. time:5 k: "key3", v: "value3",cause:Explicit
5s: remove key3
== An entry has been evicted. time:10 k: "key2", v: "value2",cause:Expired
11s: Value of key1: value1
Key2 has expired.
Key3 has removed.
== An entry has been evicted. time:21 k: "key1", v: "value1",cause:Expired
Key1 has expired.

2. Synchronous concurrent access:

use moka::sync::Cache;
use std::thread;
fn value(n: usize) -> String {
    format!("value {}", n)
}
fn main() {
    const NUM_THREADS: usize = 3;
    const NUM_KEYS_PER_THREAD: usize = 2;
    // Create a cache that can store up to 6 entries.
    let cache = Cache::new(6);
    // Spawn threads and read and update the cache simultaneously.
    let threads: Vec<_> = (0..NUM_THREADS)
        .map(|i| {
            // To share the same cache across the threads, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_THREAD;
            let end = (i + 1) * NUM_KEYS_PER_THREAD;
            thread::spawn(move || {
                // Insert 2 entries. (NUM_KEYS_PER_THREAD = 2)
                for key in start..end {
                    my_cache.insert(key, value(key));
                    println!("{}",my_cache.get(&key).unwrap());
                }
                // Invalidate every other entry (here, the even-numbered keys).
                for key in (start..end).step_by(2) {
                    my_cache.invalidate(&key);
                }
            })
        })
        .collect();
    // Wait for all threads to complete.
    threads.into_iter().for_each(|t| t.join().expect("Failed"));
    // Verify the result.
    for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
        if key % 2 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}

Output:

value 2
value 3
value 0
value 4
value 1
value 5

Reading and writing the cache concurrently does not cause any errors.
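
A related point, shown here as a minimal sketch rather than part of the original article: when several threads race to fill the same missing key, the sync cache's get_with method lets exactly one caller run the initialization closure while the other callers wait and then receive the same value, which avoids duplicate work on a cache miss.

use moka::sync::Cache;
use std::thread;
fn main() {
    let cache: Cache<String, String> = Cache::new(100);
    // Spawn a few threads that all ask for the same missing key.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cache = cache.clone();
            thread::spawn(move || {
                // get_with runs the closure in only one thread; the others
                // block until the value is ready and then receive a clone.
                let v = cache.get_with("config".to_string(), || {
                    println!("thread {i} is computing the value");
                    "expensive value".to_string()
                });
                println!("thread {i} got: {v}");
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}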

3. Asynchronous cache: below is the async example provided with the moka library:

use moka::future::Cache;
#[tokio::main]
async fn main() {
    const NUM_TASKS: usize = 16;
    const NUM_KEYS_PER_TASK: usize = 64;
    fn value(n: usize) -> String {
        format!("value {}", n)
    }
    // Create a cache that can store up to 10,000 entries.
    let cache = Cache::new(10_000);
    // Spawn async tasks and write to and read from the cache.
    let tasks: Vec<_> = (0..NUM_TASKS)
        .map(|i| {
            // To share the same cache across the async tasks, clone it.
            // This is a cheap operation.
            let my_cache = cache.clone();
            let start = i * NUM_KEYS_PER_TASK;
            let end = (i + 1) * NUM_KEYS_PER_TASK;
            tokio::spawn(async move {
                // Insert 64 entries. (NUM_KEYS_PER_TASK = 64)
                for key in start..end {
                    // insert() is an async method, so await it.
                    my_cache.insert(key, value(key)).await;
                    // get() returns Option<String>, a clone of the stored value.
                    assert_eq!(my_cache.get(&key), Some(value(key)));
                }
                    // Invalidate every 4th entry among the inserted ones.
                for key in (start..end).step_by(4) {
                    // invalidate() is an async method, so await it.
                    my_cache.invalidate(&key).await;
                }
            })
        })
        .collect();
    // Wait for all tasks to complete.
    futures_util::future::join_all(tasks).await;
    // Verify the result.
    for key in 0..(NUM_TASKS * NUM_KEYS_PER_TASK) {
        if key % 4 == 0 {
            assert_eq!(cache.get(&key), None);
        } else {
            assert_eq!(cache.get(&key), Some(value(key)));
        }
    }
}

This concludes this article on implementing caching when writing a Rust library. For more on Rust caching, search 脚本之家's earlier articles or browse the related articles, and thank you for supporting 脚本之家!
