whisper-cpp-log: allows hooking into whisper.cpp's log output and sending it to the log backend. Related projects that come up in these notes: GPTQ-for-LLaMa (4-bit quantization of LLaMA using GPTQ), ggml (a tensor library for machine learning), alpaca.cpp, and faster-whisper. The ggml library is one of the first libraries for local LLM inference.

whisper.cpp is a port of OpenAI's Whisper model in C/C++ (ggerganov/whisper.cpp on GitHub) and a custom inference implementation of the Whisper model. It is a plain C/C++ implementation without any dependencies. Apple silicon is a first-class citizen - optimized via the ARM NEON, Accelerate and Metal frameworks - and there is BLAS CPU support via OpenBLAS. A frequent question is whether there is a way to run whisper.cpp on the GPU; the GPU options are covered further down. Const-me/Whisper is an implementation of whisper.cpp for Windows that adds graphics-card support and greatly improves speed. There is also a reported issue, "segmentation fault when using Core ML" (#919).

OpenAI released Whisper in September 2022 as open source. Whisper is a Transformer-based encoder-decoder model, also referred to as a sequence-to-sequence model, trained on 680k hours of labelled speech data annotated using large-scale weak supervision. The models were trained on either English-only data or multilingual data; the English-only models were trained on the task of speech recognition. The model sizes referenced in these notes:

| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|---|---|---|---|---|---|
| small | 244 M | small.en | small | ~2 GB | ~6x |
| medium | 769 M | medium.en | medium | ~5 GB | ~2x |
| large | 1550 M | N/A | large | ~10 GB | 1x |

Whisper is OpenAI's AI subtitling tool and one of the best speech-to-subtitle tools available today: open source, deployable locally, and able to recognize many languages (its English accuracy is striking). The large-v3 model is the one used in this article (source: openai/whisper-large-v3).

In this article, you will find the following: Whisper benchmarks and results. Whisper.cpp is faster than Whisper on GPU for the tiny (1.8x faster) and base models, but runs slightly slower than Whisper on GPU for the small, medium and large models (about 1.3x slower). On the CPU it takes roughly 10x as long as on a GPU. Distil-Whisper can be run with the whisper.cpp package using the original sequential long-form transcription algorithm. whisper.cpp also benefits from quantization: quantized models require less memory and disk space and, depending on the hardware, can be processed more efficiently.

For a Homebrew install it is enough to export the share/whisper-cpp directory; reading the brew formula shows that ggml-metal.metal is copied there, which is why that path is specified. Buzz is better on the App Store: the Mac-native version of Buzz has a cleaner look, audio playback, drag-and-drop import, transcript editing, search, and much more, powered by OpenAI's Whisper.

A few scattered reports: "Hey, thanks for sharing this here! I'm the author of faster-whisper and CTranslate2." The JAX code is compatible with CPU, GPU and TPU, and can be run standalone (see Pipeline Usage) or as an inference endpoint (see Creating an Endpoint). A cuBLAS build report (Jul 2023): it doesn't work with arch=any or arch=-arch=compute_61 for a GTX 1080 in the Makefile; the commands used were `make clean` and `WHISPER_CUBLAS=1 make stream -j`, with the build info showing UNAME_S: Linux and UNAME_P: x86_64. A nixOS beginner of about a month (Jan 2024) reports that the packaged openai-whisper-cpp works great on the CPU.

Installing the reference Python implementation is one command, `pip install -U openai-whisper`, after which you can transcribe an audio file, optionally specifying language, model, and device, for example `whisper "audio.wav" --model medium --device cuda`. To achieve good performance you need an Nvidia CUDA GPU with more than 8 GB of VRAM. If you have other types of files, convert them to 16 kHz WAV first (the ffmpeg command is given later in these notes).
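The CLI call above has a direct Python equivalent. Below is a minimal sketch using the openai-whisper package installed via pip; the file name and model size are placeholders rather than anything prescribed by these notes.

```python
import whisper

# Load Whisper onto the GPU; this needs an NVIDIA card with enough VRAM for the chosen model.
model = whisper.load_model("medium", device="cuda")

# Transcribe an audio file, optionally pinning the language instead of autodetecting it.
result = model.transcribe("audio.wav", language="en")
print(result["text"])
```

On a machine without a CUDA GPU, passing device="cpu" (or omitting the argument) keeps the same code working, only slower.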
Real-world speed reports vary. One user saw a 5x to 2x speedup compared to OpenAI's Whisper model, but on an Intel i7-6700T only the tiny model was sort of usable. Another test transcribed 19 minutes of audio with mixed Mandarin Chinese and English spoken. If you want to use the GPU, see the CUDA build notes further down. On Apple hardware, with much less RAM and the slower M1 (non-Pro) processors, you can see the numbers declining a lot. When I first learned about whisper.cpp I hoped it might even run on my first-generation Raspberry Pi B, but it needs a fair amount of memory, so I gave up on that; still, it is confirmed to work in environments without a GPU, so I want to keep experimenting with it. A December 2022 post introduces a repository that makes it easy to try OpenAI's high-accuracy Whisper model offline, even without a GPU, and an April 2023 post walks through getting the Python version of Whisper working on Windows 11, written as a memo after a long break from Python.

The original Whisper model is implemented in Python and supports running on both the CPU and the GPU. A previous article introduced OpenAI's open-source Whisper model, which can perform speech recognition and transcription in 99 languages; but Whisper uses a lot of compute and the command line has a high barrier to entry, so that article also introduced the C++ implementation, whisper.cpp. Recently, Georgi Gerganov released this C++ port optimized for the CPU and especially the Apple Silicon platform. The whisper.cpp project ports Whisper to C/C++, while the Const-me/Whisper project is a Windows implementation of whisper.cpp with added graphics-card support, which makes it much faster. (PaddleSpeech can also be installed as an alternative toolkit.) Distil-Whisper can likewise be run with the whisper.cpp package using the original sequential long-form transcription algorithm.

Whisper.net versioning: each version of Whisper.net is tied to a specific version of whisper.cpp, and the Whisper.net version is the same as the version of whisper.cpp it is based on; the patch version, however, is not tied to whisper.cpp's.

Build notes: a plain CPU build is just `make clean`, `make -j`, then `cp bin/* .` to collect the binaries. I want to test it with a CUDA GPU for speed; to use the GPU for streaming (Dec 2023), you need to run `WHISPER_CUBLAS=1 make stream`, which modifies the gcc command so that it includes ggml-cuda.o and also links -lcudart and -lcublas. CPU inference still takes quite a while even when CPU-specific instructions are used. The README in whisper.cpp/models describes the procedure for using Hugging Face models, so follow that when downloading. From the whisper.cpp folder you can run `mkdir -p output` to prepare an output directory, and there is a video showing how to use Whisper via GPU in Subtitle Edit. To be complete, it was not working for me without some other steps. After doing this (Sep 2022) I was able to run the following command (choose whatever model fits your needs best): `whisper "audio.wav" --model medium --device cuda`.

faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, a fast inference engine for Transformer models. This implementation is up to 4 times faster than openai/whisper for the same accuracy while using less memory.
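Because faster-whisper keeps coming up, here is a minimal usage sketch, assuming the faster-whisper package is installed; the file name, model size and compute type are illustrative choices, not recommendations from these notes.

```python
from faster_whisper import WhisperModel

# float16 on an NVIDIA GPU; on a CPU-only machine use device="cpu", compute_type="int8".
model = WhisperModel("small", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.wav", beam_size=5)
print(f"Detected language: {info.language} (probability {info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:6.2f} -> {segment.end:6.2f}] {segment.text}")
```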
A common Python failure mode: `model = whisper.load_model("small", device="cuda")` still refuses to run and raises `RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False`, even though the machine does have a GPU and TensorFlow can use it without problems. (Mar 2023) Whisper is a neural network trained and open-sourced by OpenAI whose robustness and accuracy for English speech recognition approach human level. Note that the VRAM requirements in the model table above come from simulations using torch.cuda.set_per_process_memory_fraction(), so they may not reflect exactly what happens on, say, a GPU with exactly 5 GB of VRAM.

The official Subtitle Edit website is https://nikse.dk/ and Subtitle Edit is also on GitHub. (May 2023, in Spanish) In this video we will see how to install and use whisper.cpp on your own computer to go from audio to text. The WhisperScript update described below adds a bunch of improvements to the visualization, playback, editing, and exporting of your transcripts. (Apr 2024) Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

(Oct 2023) I am trying to run whisper.cpp on a Jetson Nano for a real-time speech recognition task. I can run the stream method with the tiny model, but the latency is too high. (Oct 2022) I also tried Whisper on a Jetson TX2 development kit (Ubuntu 18.04, NVIDIA JetPack 4, Python 3). (Nov 2023) Three weeks ago I discovered whisper.cpp and tried it on my 6-year-old desktop: CPU AMD Ryzen 5 1600 (six cores), GPU Radeon RX 460/560D with 2048 MB of video memory (gfx803), running Arch-based Manjaro; I was a bit surprised by the benchmark results on this CPU. Here is whisper.cpp running on a MacBook Pro M1 (CPU only); hope you find this project interesting, and let me know if you have any questions about the implementation. The canonical sample output looks like this:

[00:00.000 --> 00:07.940]  And so my fellow Americans ask not what your country can do for you.

To integrate whisper.cpp into your transcription workflow effectively, here's a stepwise guide: 1. Clone the Repository: begin by cloning the dedicated whisper.cpp repository to obtain the source code. 2. Acquire the Whisper Model: you need to fetch a Whisper model converted into ggml format; the repository comes with "ggml-tiny.bin" model weights, and you can follow the steps in the whisper.cpp repo to download one of the other models. Now build whisper.cpp using make; once that finishes, whisper.cpp is compiled and ready to use, and you can run all the examples as usual. With ffmpeg installed, you can open your whisper.cpp folder in Finder using `open .` in the terminal, create a new folder called data, and copy-paste the audio file(s) that you want to convert into the data folder. A May 2023 Chinese article ("whisper, the AI subtitling tool: the most complete Chinese guide") claims to be the most complete Chinese-language walkthrough of deploying Whisper on Windows currently online.

The benchmark article also ships code: code to transcribe YouTube videos using whisper.cpp (CPU) and openai-whisper (GPU), as well as code to visualize the benchmark results with matplotlib. The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware, locally and in the cloud. It's also possible for Core ML to run different portions of the model on different devices.

(Mar 2024) Then we simply call the Whisper model using our GPU instead of the CPU: `gpu_model = whisper.load_model("base", device="cuda")  # if you are loading Whisper using GPU`. (Nov 2022) The code above uses register_forward_pre_hook to move the decoder's input to the second GPU ("cuda:1") and register_forward_hook to put the results back on the first GPU ("cuda:0"). The latter is not absolutely necessary, but it is added as a workaround because the decoding logic assumes the outputs are on the same device as the encoder.
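The "code above" referenced in that multi-GPU note is not reproduced here, so the following is a rough reconstruction of the idea, assuming openai-whisper and two visible CUDA devices; the hook bodies are generic stand-ins rather than the original code.

```python
import torch
import whisper

# Encoder stays on cuda:0, decoder is moved to cuda:1.
model = whisper.load_model("large", device="cuda:0")
model.decoder.to("cuda:1")

def to_second_gpu(module, args):
    # Pre-hook: move the decoder's positional inputs (tokens, audio features) to cuda:1.
    return tuple(a.to("cuda:1") if torch.is_tensor(a) else a for a in args)

def back_to_first_gpu(module, args, output):
    # Post-hook: move the decoder output back so the decoding logic sees cuda:0 tensors.
    return output.to("cuda:0") if torch.is_tensor(output) else output

model.decoder.register_forward_pre_hook(to_second_gpu)
model.decoder.register_forward_hook(back_to_first_gpu)

result = model.transcribe("audio.wav")
print(result["text"])
```

The pre-hook only sees positional arguments, which is enough here because the tokens and audio features are exactly what need to hop devices.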
We're excited to announce WhisperScript v1.1, an update to our Electron desktop Whisper implementation that introduces a lot of new features to speed up your transcription workflow. For Subtitle Edit users there is also a tutorial on the different Whisper modes available in Subtitle Edit, that is, Whisper OpenAI vs Whisper CPP vs Whisper Const-me.

Building whisper.cpp with CLBlast support has both a Makefile route and a CMake route; the exact commands are collected further down. "@ggerganov, is there a reason why it's not included in the code? I'm completely unaware of the technical details of this topic, but I'd like to use the GPU." We also provide an end-to-end Google Colab that benchmarks Whisper against Distil-Whisper; try to run it on that Veritasium video and see it for yourself.

(May 2023) Download OpenAI's Whisper. You need to use a 16 kHz sampling-rate WAV file; if you have other formats, convert them first with `ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav`. (Jul 2023) I can confirm that the issue of not running on the GPU on macOS M1 is still present. (Mar 2024) Table 1: Whisper models, parameter sizes, and languages available (see the model table earlier in these notes). I also tested the openai-whisper-cpp package with a 2-second MP3 and it took 12 seconds to transcribe with the --language en --model tiny parameters.

After a good bit of research I found that the main-cuda.Dockerfile has some issues (the compile-versus-runtime distinction is explained below). On nixOS I use a modified nix file: cudaPackages, libcublas and cudatoolkit go into buildInputs, cuda_nvcc into nativeBuildInputs, plus env = { WHISPER_CUBLAS = "1"; }. For nvcc, if no value for the '--gpu-code' option is specified, it defaults to the value of '--gpu-architecture'; in that situation, as the only exception to the usual rule, the value given for '--gpu-architecture' may be a "real" architecture (such as sm_50), in which case nvcc uses that real architecture directly.

Some Japanese notes, translated. First, plain openai-whisper: following the article "How the inference of the Whisper speech-recognition model was made almost twice as fast", I applied fp16 conversion plus no_grad. (Oct 2022) And then there is Whisper CPP! Without libtorch or ONNX, it does the model's forward pass with its own ggml and does the FFT processing (mel-spectrogram computation) and token decoding itself in C++, which is impressive. (May 2023) By the way, whisper-jax, whisper's speed-up sibling, uses the GPU almost fully; in an RTX 3600 environment it transcribed a 10-minute recording in about 3 minutes, so if you have a GPU environment whisper-jax may be the fastest option. There is also faster-whisper, so I would like to compare speeds. Read the README.md files in whisper.cpp for the details.

On language detection: the reason it went to "Welsh" on your accent is that Whisper autodetects the language and outputs in that language. You can let it do this and it should work; if not, prompt it in that language. Try the prompting techniques from myself and @RonaldGRuckus; this should fix the "Welsh" problem.

(Dec 2022) For CPU inference, model quantization is a very easy-to-apply method with great average speedups, and it is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes, from tiny (39M parameters) to large (1.5B parameters).
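As a concrete illustration of that dynamic-quantization point, here is a small sketch assuming the openai-whisper package; which layers to quantize and how accuracy shifts are not taken from these notes and should be verified on your own audio.

```python
import torch
import whisper

# CPU-only workflow: dynamic quantization stores Linear-layer weights as int8
# and quantizes activations on the fly at inference time.
model = whisper.load_model("base", device="cpu")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# fp16=False because this runs on the CPU.
result = quantized.transcribe("audio.wav", fp16=False)
print(result["text"])
```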
(Apr 2023, Chinese guide) Here you will find model-file downloads and simple usage for Whisper and the whisper.cpp project, followed by the whisper.cpp project itself and its GUI version, WhisperDesktop, plus whisper.cpp's model-conversion tool and some other required components. To download the Whisper models and install Whisper, you can install or update directly from the command line with pip (if the pip commands are unclear, skip ahead to the Whisper section). (May 2023) Setting up openai-whisper locally on Windows with NVIDIA GPU acceleration needs two tools: Anaconda (a Python environment manager) and Chocolatey (a Windows package manager). Step 1: install Anaconda; the installer is just a matter of clicking Next. Step 2: install Whisper: open the Anaconda Prompt, run `conda create --name whisper python=3.x` (pick your Python 3 version), then enter the whisper environment and install.

(Jun 2023) Purpose: these instructions cover the steps not explicitly set out on the main Whisper page, e.g. for those who have never used Python code or apps before and do not have the prerequisite software already installed. Requirements: full admin rights on your computer, and a PC with a CUDA-capable dedicated GPU with at least 4 GB of VRAM (but more VRAM is better).

When compiling things with CUDA support you need to distinguish between the compile phase and the runtime phase: when you build the image with docker build without mapping a graphics card into the container, the build should link against the CUDA library stubs (e.g. libcuda.so). However, since GPU inference is so much faster than real time anyway (around 0.5 seconds for 30 seconds of audio), the extra speed would only be useful if you were transcribing a large amount of audio (podcasts, movies, large collections of audio files). Also, I've tried the solution from @diaojunxian and it works: inference is much faster when it runs on the GPU. (Nov 2022) CUDA is apparently unsupported on M1 Macs, and if I try --device mps I get a warning that the operator 'aten::repeat_interleave.self_int' is not currently supported on the MPS backend and will fall back to run on the CPU; this may have performance implications. (Nov 2022) Will ggml / whisper.cpp support CUDA / GPU? One of the main goals of this implementation is to be very minimalistic and able to run on a large spectrum of hardware, and the existing CPU-only implementation achieves this goal: it is bloat-free and very simple.

This is the Unity3D binding for whisper.cpp. It provides high-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model running on your local machine.

I tried the cuBLAS instructions but could not get them to work (maybe my mistake, or a GPU incompatibility); I would appreciate a tip or some advice. Building whisper.cpp with CLBlast support instead:

```
# Makefile route
cd whisper.cpp
make clean
WHISPER_CLBLAST=1 make -j

# CMake route
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST=ON ../
```

Run all the examples as usual. (Mar 2023) In the end, with fp16 it runs with about 4.5 GB of GPU memory. (Nov 2022) Hello there! In principle you should be able to apply TensorRT to the model and get a similar increase in performance for GPU deployment.

Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models; however, it is not designed for real-time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. For more control over the generation parameters, use the model + processor API directly: ad-hoc generation arguments can be passed to model.generate, including num_beams for beam search, return_timestamps for segment-level timestamps, and prompt_ids for prompting.
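A sketch of that model + processor route with the Transformers library follows; the checkpoint name, the dummy audio and the prompt text are placeholders assumed for illustration, and the exact keyword names should be checked against the Transformers documentation for your version.

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("distil-whisper/distil-large-v3")
model = WhisperForConditionalGeneration.from_pretrained(
    "distil-whisper/distil-large-v3", torch_dtype=torch.float16
).to("cuda")

# Placeholder input: 5 seconds of silence at 16 kHz; load a real file instead.
audio = np.zeros(16_000 * 5, dtype=np.float32)
input_features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features
input_features = input_features.to("cuda", dtype=torch.float16)

# Ad-hoc generation arguments: beam search, segment timestamps, and a text prompt.
prompt_ids = processor.get_prompt_ids("glossary: whisper.cpp, ggml", return_tensors="pt").to("cuda")
generated = model.generate(
    input_features,
    num_beams=5,
    return_timestamps=True,
    prompt_ids=prompt_ids,
)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```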
A hardware anecdote: I was running the desktop version of Whisper using the CMD prompt interface successfully for a few days on the 4 GB NVIDIA graphics card that came with my Dell, so I sprang for an AMD Radeon RX 6700 XT and had it installed yesterday, only to discover that Whisper didn't recognize it and was reverting to my CPU. Another user could not use CUDA due to compatibility issues with PyTorch 2.x and switched to the Intel XPU backend instead; with the xpu build, both model loading time and decoding now have a "good" speed, tested with `time whisper --language en --model tiny.en --device xpu tests/jfk.flac`. Some random (slightly adjusted) ChatGPT advice that helped me, and might help others as well (YMMV): "To resolve this issue, you should modify the instantiation of the ctranslate2.Whisper object in your code to correctly match the expected constructor arguments. Assuming you want to use your GPU, the device should likely be 'cuda', and you may need to adjust the compute_type and other parameters."

Back to ggml: it's a pure C library that converts models to run on several devices, including desktops, laptops, and even mobile devices, and it can therefore also be considered a tinkering tool for trying new optimizations that are then incorporated into the downstream projects. The availability of advanced technology and tools, AI in particular, is increasing at an ever more rapid rate, so I am going to see just how easy it is to create an AI-powered real-time transcription tool.

(Mar 2023) In theory, if you succeed in building the Core ML models, you can take full advantage of whatever CPU, GPU and RAM your device has, because Core ML supports all the compute units available on your device: CPU, GPU and Apple's Neural Engine (NE). On macOS, `brew install whisper-cpp` is enough to get the CLI. I am also able to run whisper.cpp on my Raspberry Pi 4 and to run the live-stream example (with step set to 700 it works fine), although I get "hallucinations" when nobody is talking.

Build and feature flags that show up in the bindings: metal: enable Metal support; openblas: enable OpenBLAS support; opencl: enable OpenCL support, which implicitly enables a hidden gpu flag at runtime (whisper.cpp does not treat OpenCL as a GPU, so it is always enabled at runtime). An alternative CLBlast build is `cmake -B build -DWHISPER_CLBLAST=ON` followed by `cmake --build build -j --config Release`; run all the examples as usual. (See also: "A Step-by-Step Guide to Whisper.cpp".)

In a provisional benchmark on Mac M1, distil-large-v3 is over 5x faster than Whisper large-v3, while performing to within 0.8% WER over long-form audio. [Benchmark table lost in extraction: load time and transcription time for ggml-large-v3 (lower is better), RTX 3090 versus M1 Max 24-core GPU, with an "RTX 3090 advantage" column; measured with the model file in the page cache.] This is more of a real-world test, with actual workloads to be handled. Want to try it yourself? Use my code to run your own tests.

Whisper and whisperX also split the audio up internally, but they have mechanisms to fix the segment boundaries, so they are much better. Also, when running whisper my GPU hovers around 40-50% utilization, while running whisperX pushes it up to over 95% utilization.
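Since whisperX is the variant credited above with fixing segment boundaries, here is a minimal sketch of its two-stage flow (batched transcription, then alignment), assuming the whisperx package; the model name, batch size and file path are illustrative assumptions.

```python
import whisperx

device = "cuda"
audio = whisperx.load_audio("audio.wav")  # placeholder path

# Stage 1: batched Whisper transcription.
model = whisperx.load_model("large-v2", device, compute_type="float16")
result = model.transcribe(audio, batch_size=16)

# Stage 2: alignment, which is what cleans up the segment/word boundaries.
align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

for segment in result["segments"]:
    print(f"[{segment['start']:.2f} -> {segment['end']:.2f}] {segment['text']}")
```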
whisper.cpp is a lightweight intelligent speech recognition library and a port of the OpenAI Whisper model, implemented in C/C++ and originally running only on the CPU. It has no dependencies, low memory usage and excellent performance, supports multiple technologies and platforms, and supports mixed precision and integer quantization, among other advantages. It is suitable for scenarios that require real-time speech recognition. whisper.cpp supports integer quantization of the Whisper ggml models, and encoder processing can be accelerated on the CPU via OpenBLAS. Here are the steps for creating and using a quantized model, in outline: build the tools, run the bundled quantization program on a ggml model file, and pass the quantized file to the examples in place of the original (the exact commands are in the repository README).

Steps for getting started: clone the whisper.cpp repository, compile whisper.cpp using make, and from the whisper.cpp directory run, for example, `./main -m models/ggml-tiny.bin -f rhasspy/setkitchenlights20percent.wav -t 8`. To get whisper up and running with CUDA I had to do a few extra steps (see the CUDA build notes above). There is also a WASM example, a minimal example running fully in the browser. Its usage instructions: load a ggml model file (you can obtain one from here; recommended: tiny or base), select an audio file to transcribe or record audio from the microphone (sample: jfk.wav), then click on the "Transcribe" button to start the transcription.

This major release includes the following changes: full GPU processing of the Encoder and the Decoder with CUDA and Metal is now supported. Related housekeeping in ggerganov#1517, "Redistribute CUDA DLLs": we should add these .dll files to the release, for added convenience.

Along the same self-hosted lines: the free, open-source OpenAI alternative is a drop-in replacement for OpenAI running on consumer-grade hardware, with no GPU required; it is self-hosted, community-driven and local-first, supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF) and Llama models, runs GGUF and transformers model files, and uses whisper.cpp and the server of llama.cpp.

Example apps and integrations mentioned across these notes: an iOS mobile application using whisper.cpp; whisper.swiftui, a SwiftUI iOS / macOS application using whisper.cpp; whisper.android, an Android mobile application using whisper.cpp; whisper.nvim, a speech-to-text plugin for Neovim; generate-karaoke.sh, a helper script to easily generate a karaoke video from raw audio capture; and livestream.sh for livestream audio.

Compared to OpenAI's PyTorch code, Whisper JAX runs over 70x faster, making it the fastest Whisper implementation available. The JAX route has its own pitfalls, though: "WARNING - No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) I am pretty sure that the box I am connecting to via ssh has 20 CPU cores and 2 GPU cores, however the GPUs are not recognized by JAX."
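To make the Whisper JAX numbers concrete, here is a usage sketch based on the whisper-jax project's pipeline class as I understand it; the class name, constructor arguments and checkpoint are assumptions to verify against that project's README, not something documented in these notes.

```python
import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline  # note: the class name is spelled this way upstream

# Half precision plus batching is where the large GPU/TPU speed-ups come from.
pipeline = FlaxWhisperPipline("openai/whisper-large-v2", dtype=jnp.float16, batch_size=16)

# The first call triggers JIT compilation; subsequent calls are fast.
outputs = pipeline("audio.mp3", task="transcribe", return_timestamps=True)
print(outputs["text"])
```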